🚀 Deep Learning Odyssey¶
🪖 Special Forces:
🐒 Jeyakumar Sriram (p2214618)
🦧 Shawn Lim Jun Jie (p2239745)
📚 Class: DAAA/FT/2A/01
🔍 Module Code: ST1504
🌟 Mission Brief¶
Embark on a transformative journey as we engineer a marvelous Generative Adversarial Network (GAN) to breathe life into images from the renowned CIFAR-10 dataset. Our mission is to generate realistic and twisty pixels.
🎨 Artistry in AI¶
Generative Adversarial Networks (GANs) are a tested architecture, able to craft images that blur the line between reality and imagination with finesse and sophistication.
🌌 Exploring the Data Cosmos¶
CIFAR-10, a dataset of 60,000 32x32 color images spanning 10 distinct classes. Our objective is to create a GAN monster to produce images that match the visual diversity of the authentic CIFAR-10 dataset.
🔬 Methodology¶
Harnessing the power of PyTorch, we'll architect our GAN to wield a generator that crafts indistinguishable images, while a discriminator sharpens its ability to discern authentic from generated.
🚀 Project Blueprint¶
- 🔧 Imports & Configuration
- 🔍 In-Depth Research
- 🛠️ Engineering Marvels: Feature Engineering
- 🗺️ Navigating the CIFAR-10 Landscape
- 📊 Metrics: Precision and Creativity
- 🤖 Beep Boop! GAN Comes to Life
- 🎯 Engineering our Model for Excellence
- 🔍 Critical Appraisal: Evaluating Our Creation
- 🔚 Conclusion: Bridging Creativity and Performance
📚 References¶
Betzalel, E. et al. (2022) ‘A Study on the Evaluation of Generative Models’, arXiv [cs.LG]. Available at: http://arxiv.org/abs/2206.10935.
Borji, A. (2021) ‘Pros and Cons of GAN Evaluation Measures: New Developments’, arXiv [cs.LG]. Available at: http://arxiv.org/abs/2103.09396.
Gandhi, R. (2018). Generative Adversarial Networks — Explained. [online] Medium. Available at: https://towardsdatascience.com/generative-adversarial-networks-explained-34472718707a.
Ghosh, B. et al. (2020) ‘An Empirical Analysis of Generative Adversarial Network Training Times with Varying Batch Sizes’, in 2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), pp. 0643–0648. doi: 10.1109/UEMCON51285.2020.9298092.
Goodfellow, I. J. et al. (2014) ‘Generative Adversarial Networks’, arXiv [stat.ML]. Available at: http://arxiv.org/abs/1406.2661.
Mack, D. (2019). A simple explanation of the Inception Score. [online] Octavian. Available at: https://medium.com/octavian-ai/a-simple-explanation-of-the-inception-score-372dff6a8c7a.
Mirza, M. and Osindero, S. (2014) ‘Conditional Generative Adversarial Nets’, arXiv [cs.LG]. Available at: http://arxiv.org/abs/1411.1784.
Odena, A., Olah, C., & Shlens, J. (2016). Conditional Image Synthesis With Auxiliary Classifier GANs, arXiv preprint arXiv:1610.09585.
Pleiss, G. et al. (2020) ‘Identifying Mislabeled Data using the Area Under the Margin Ranking’, arXiv [cs.LG]. Available at: http://arxiv.org/abs/2001.10528.
Radford, A., Metz, L. and Chintala, S. (2016) ‘Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks’, arXiv [cs.LG]. Available at: http://arxiv.org/abs/1511.06434.
Sato, N. and Iiduka, H. (2023) ‘Existence and Estimation of Critical Batch Size for Training Generative Adversarial Networks with Two Time-Scale Update Rule’, arXiv [cs.LG]. Available at: http://arxiv.org/abs/2201.11989.
Zhao, S. et al. (2020) ‘Differentiable Augmentation for Data-Efficient GAN Training’, arXiv [cs.CV]. Available at: http://arxiv.org/abs/2006.10738.
Zhang, Xinbin, An Improved Method of Identifying Mislabeled Data and the Mislabeled Data in MNIST and CIFAR-10 (November 30, 2017). Available at SSRN: https://ssrn.com/abstract=3080736 or http://dx.doi.org/10.2139/ssrn.3080736.
🔧 Imports & Configuration¶
We will be using these libraries in our project. To ensure availability we will import the crucial ones.
!pip install --quiet tensorflow matplotlib tqdm torchmetrics[image] kornia aum
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv [notice] A new release of pip is available: 23.3.1 -> 23.3.2 [notice] To update, run: python -m pip install --upgrade pip
import warnings; warnings.filterwarnings("ignore")
# PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
# TorchVision
from torchvision import transforms, datasets
# TorchGAN
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.kid import KernelInceptionDistance
from aum import AUMCalculator, DatasetWithIndex
# Keras
from keras.datasets import cifar10
from keras.utils import to_categorical
# DiffAugment
import kornia.augmentation as K
# Warnings and Time
import time
import random
# Data Visualization
import matplotlib.pyplot as plt
import numpy as np
# Progress Bar
from tqdm import tqdm
# Device Configuration
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
2024-01-31 11:31:15.380151: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used. 2024-01-31 11:31:15.421975: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered 2024-01-31 11:31:15.422013: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered 2024-01-31 11:31:15.423034: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2024-01-31 11:31:15.429291: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used. 2024-01-31 11:31:15.429905: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. 2024-01-31 11:31:16.247563: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Constants¶
IMAGE_SIZE = 32 # image size, number of pixel, assuming square image (length = width)
CHANNELS = 3 # number of channels, which in our case is 3, because RGB
NUM_CLASS = 10 # number of classes, which in cifar-10, is 10
TRAIN_BATCH_SIZE = 512 # Adjust based on how powerful GPU is, 2048 for RTX 4090 but 512 for P100
CLASSES = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
Utility Code¶
def plot_losses(epochs, models):
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(12, 12))
epochs = list(range(epochs))
for model, name in models:
axes[0, 0].plot(epochs, model.disc_scores, marker='o', linestyle='-', label=f'{name}')
axes[0, 0].set_title('Discriminator Loss over Epochs/Steps')
axes[0, 0].set_xlabel('Epochs/Steps')
axes[0, 0].set_ylabel('Loss')
axes[0, 0].legend()
axes[0, 0].grid(True)
for model, name in models:
axes[0, 1].plot(epochs, model.gen_scores, marker='x', linestyle='-', label=f'{name}')
axes[0, 1].set_title('Generator Loss over Epochs/Steps')
axes[0, 1].set_xlabel('Epochs/Steps')
axes[0, 1].set_ylabel('Loss')
axes[0, 1].legend()
axes[0, 1].grid(True)
for model, name in models:
axes[1, 0].plot(model.fid_epochs, model.kid_scores, marker='s', linestyle='-', label=f'{name}')
axes[1, 0].set_title('KID Score over Epochs/Steps')
axes[1, 0].set_xlabel('Epochs/Steps')
axes[1, 0].set_ylabel('KID Score')
axes[1, 0].legend()
axes[1, 0].grid(True)
for model, name in models:
axes[1, 1].plot(model.fid_epochs, model.fid_scores, marker='s', linestyle='-', label=f'{name}')
axes[1, 1].set_title('FID Score over Epochs/Steps')
axes[1, 1].set_xlabel('Epochs/Steps')
axes[1, 1].set_ylabel('FID Score')
axes[1, 1].legend()
axes[1, 1].grid(True)
plt.tight_layout()
plt.show()
plt.figure(figsize=(12, 6))
def visualize_images(images, title):
fig, axes = plt.subplots(1, len(images), figsize=(12, 4))
for i in range(len(images)):
axes[i].imshow(images[i].cpu().numpy())
axes[i].axis('off')
fig.suptitle(title)
plt.show()
🔍 In-Depth Research¶
The GAN Concept:¶
GAN stands for Generative Adversarial Networks. As the name suggests, it features two adversarial networks competing to complete a generative task.
The GAN architecture was first introduced in this paper by Ian Goodfellow and his colleages and has proven itself as a viable method for generative tasks.
The generator generates a set of fake images using some noise while the discriminator tries to differentiate the fake images from actuals images sampled from the dataset.The two models have two opposing objectives:
- Generator creates images and tries to fool the discriminater
- Discriminator will improve and prevent generator from fooling it

As shown in the diagram, the generator takes in noise as input and through a series of upsampling layers, turns the noise into something like a image. The discriminator on the other hand takes in both the generated images and some original images and tries to classify which one is fake and not.

Backpropagation is done twice. Once to update the discriminator and once to update the generator while backpropagating through the discriminator.
Challenges in Training GAN¶
This adversarial way of training presents unique challenges that are not easy to solve. During the training, we need to ensure the generator and discrminator are learning in a balanced way.
Discriminator Too Strong -> Generator will not even be able to start fooling it and thus won't learn much
Generator Too Strong -> Discriminator will not be able to provide useful information during backpropagation hindering improvement

Without a delicate balance the GAN will not produce good balance. Another common problem is Mode Collapse. This is a scenarios where the generator learns to produce one specific type of image very well and fools the discriminator consistently, effectively trading output variety for a higher score.

Another problem could be vanishing gradients. When there is vanishing gradients, a discrminator might be less sensitive to the generator's output and provide weak feedback. Since the generator's update backprogates through the discrminator, vanishing gradient could result in the generator learning little as the gradient's are not as useful by the time they reach it. This can be mitigated by using normalisation in our model architecture.
The CIFAR-10 Story¶
CIFAR-10 stands for the "Canadian Institute for Advanced Research - 10 classes" and was created to facilitate research in computer vision and machine learning. It contains 60,000 labelled 32x32 images from 10 classes which are plane, car, bird, cat, deer, dog, frog, horse, ship, and truck. The dataset has a train-test split of 50,000-10,000.
While it is a popular dataset, it does have its problems:
Low Image Resolution:
- The images in CIFAR-10 are relatively small, with dimensions of 32x32 pixels. This low resolution can make it challenging for models to capture fine-grained details in the images.
Limited Diversity:
- The dataset consists of 60,000 images across 10 classes, and each class contains only 6,000 images. The limited number of samples per class may not be sufficient for training complex models.
Mislabelling:
- Though the images were all labelled by humans, due to some human error there are some mislabelled images. A paper published by Mr Zhang from Beijing University of Posts and Telecommunications found 118 images in CIFAR-10 being mislabelled. After reclassifying them, it was observed that model performance improved.
In lieu of these limitations, a highly complex model may not always be the best choice. We will start of simple and focus on improving the training process and model architecture rather than just increasing model complexity.
🛠️ Engineering Marvels: Feature Engineering¶
Scaling¶
One of the important preprocessing that has to be done is to scale the pixel values for the training images to [-1, 1]. This is because we will be using the tanh activation function for the generator model which ranges from -1 to 1. So to ensure that the real and fake images have the same range, we will need to scale the real images to [-1, 1]. This can also help the model converge faster
This will be the formula for performing the transformation:
$$X_{new} = \frac{X_{old}}{127.5} - 1$$One Hot Encoding¶
As we need to labeled images, and since the labelling are in nominal, we will do one hot encoding on the labels
Batch Size¶
In the context of training GANs batch size is one of the most important hyperparameters. It defines how many real images are being passed into the GAN at any one time. Too small batch size might result in high variability in gradient estimates and lead to instability while training. While too large of a batch size might run into memory requirements.

We found an analysis of GANs with varying batch sizes in this paper where they analysed the number of step $N$ needed to reach a lower FID score across varying batch sizes. They tested DCGAN, WGAN-GP and BigGAN. Their results for DCGANs on the LSUN-Bedroom dataset seems to suggests that bigger batch sizes may result in faster improvement. Although the dataset is different from our, this provides a good starting point for us to begin finding a optimal batch size for our model.
Handling Mislabelling¶
As mentioned in a previous section, CIFAR-10 containes quite some mislabelled data. Even during EDA I had observed a few. But in this feature engineering section, I would like to identify the mislabelled data and reclassify them into their right category.
Methodology
- Finding Mislablled Data
- Reclassfying
To train our ResNet model used to classify the mislabelled images we will temporarily use the torch version of Cifar10 but later we will switch back to Tensorflow.
To start of I will first get the dataset, perform a train val split on the training data. This is done in accordance with the methods outline in the research paper.
VALID_SIZE = 5000
TRAIN_BATCH_SIZE = 64
VALID_BATCH_SIZE = 64
LEARNING_RATE = 0.1
MOMENTUM = 0.9
EPOCHS = 32
CSV_DIR = './csv'
train_transforms = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
])
test_transforms = transforms.Compose([
transforms.ToTensor(),
])
train_dataset_torch = datasets.CIFAR10('test',
train=True,
transform=train_transforms,
download=True)
val_dataset_torch = datasets.CIFAR10('test', train=True, transform=test_transforms)
test_dataset_torch = datasets.CIFAR10('test', train=False, transform=test_transforms)
# do a shuffle and then slice the training data to have training and validation
indices = torch.randperm(len(train_dataset_torch))
train_indices = indices[:len(indices) - VALID_SIZE]
valid_indices = indices[len(indices) - VALID_SIZE:]
train_data = torch.utils.data.Subset(train_dataset_torch, train_indices)
val_data = torch.utils.data.Subset(val_dataset_torch, valid_indices)
# use AUM's custom dataset
train_set = DatasetWithIndex(train_data)
val_set = DatasetWithIndex(val_data)
test_set = DatasetWithIndex(test_dataset_torch)
val_loader = DataLoader(val_set,
batch_size=64,
shuffle=False,
pin_memory=(torch.cuda.is_available()))
test_loader = DataLoader(test_set,
batch_size=64,
shuffle=False,
pin_memory=(torch.cuda.is_available()))
Files already downloaded and verified
Here, we will create the model, instantiate the a data structue to hold the AUM, and write functions to evaluate AUM of each datapoint using the model.
# Reference: https://github.com/asappresearch/aum/blob/master/examples/cifar100/train.py
from torchvision.models import resnet34
class AverageMeter(object):
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
def train_step(metrics, aum_calculator, batch_step, num_batches,
batch, epoch, num_epochs, model, optimizer, device, calculate_aum):
start = time.time()
model.train()
with torch.enable_grad():
optimizer.zero_grad()
# if we want to calculate the AUM, then there will be another value to unpack
if calculate_aum:
input, target, sample_ids = batch
else:
input, target = batch
input = input.to(device)
target = target.to(device)
# Compute output
output = model(input)
loss = F.cross_entropy(output, target)
# Compute gradient and optimize
loss.backward()
optimizer.step()
# Measure accuracy & record loss
end = time.time()
batch_size = target.size(0)
_, pred = output.data.cpu().topk(1, dim=1)
error = torch.ne(pred.squeeze(), target.cpu()).float().sum().item() / batch_size
metrics['error'].update(error, batch_size)
metrics['loss'].update(loss.item(), batch_size)
metrics['batch_time'].update(end - start)
# Update AUM
if calculate_aum:
aum_calculator.update(output, target, sample_ids.tolist())
# log to console
if (batch_step + 1) % 100 == 0:
results = '\t'.join([
'TRAIN',
f'Epoch: [{epoch}/{num_epochs}]',
f'Batch: [{batch_step}/{num_batches}]',
f'Time: {metrics["batch_time"].val:.3f} ({metrics["batch_time"].avg:.3f})',
f'Loss: {metrics["loss"].val:.3f} ({metrics["loss"].avg:.3f})',
f'Error: {metrics["error"].val:.3f} ({metrics["error"].avg:.3f})',
])
print(results)
def eval_step(metrics, batch, model, device):
start = time.time()
model.eval()
with torch.no_grad():
input, target, sample_ids = batch
input = input.to(device)
target = target.to(device)
# Compute output
output = model(input)
loss = F.cross_entropy(output, target)
# Measure accuracy & record loss
end = time.time()
batch_size = target.size(0)
_, pred = output.data.cpu().topk(1, dim=1)
error = torch.ne(pred.squeeze(), target.cpu()).float().sum().item() / batch_size
metrics['error'].update(error, batch_size)
metrics['loss'].update(loss.item(), batch_size)
metrics['batch_time'].update(end - start)
# Load Model
model = resnet34(num_classes=10)
model = model.to(device)
num_params = sum(x.numel() for x in model.parameters() if x.requires_grad)
print(model)
f'Number of parameters: {num_params}'
ResNet(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(2): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer2): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(2): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(3): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer3): Sequential(
(0): BasicBlock(
(conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(2): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(3): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(4): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(5): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer4): Sequential(
(0): BasicBlock(
(conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(2): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
(fc): Linear(in_features=512, out_features=10, bias=True)
)
'Number of parameters: 21289802'
Once modelling is done, we will now calculate the AUM for each datapoint using the ResNet model and save the AUM values to a CSV. We are saving it to a CSV because that is how the AUM libary is implemented.
# Reference: https://github.com/asappresearch/aum/blob/master/examples/cifar100/train.py
import math
# Create optimizer & lr scheduler
parameters = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(parameters,
lr=LEARNING_RATE,
momentum=MOMENTUM,
nesterov=True)
milestones = [0.5 * EPOCHS, 0.75 * EPOCHS]
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=milestones, gamma=0.1)
# Keep track of AUM
aum_calculator = AUMCalculator(CSV_DIR, compressed=True)
# Keep track of things
best_error = math.inf
# keep the best model
best_model = None
for epoch in range(EPOCHS):
train_loader = DataLoader(train_set,
batch_size=TRAIN_BATCH_SIZE,
shuffle=True,
pin_memory=(torch.cuda.is_available()),
num_workers=0)
# set up the metrics
train_metrics = {
'loss': AverageMeter(),
'error': AverageMeter(),
'batch_time': AverageMeter()
}
val_metrics = {
'loss': AverageMeter(),
'error': AverageMeter(),
'batch_time': AverageMeter()
}
num_batches = len(train_loader)
for batch_step, batch in enumerate(train_loader):
train_step(train_metrics, aum_calculator,
batch_step, num_batches, batch, epoch, EPOCHS, model,
optimizer, device, True)
scheduler.step()
num_batches = len(val_loader)
for batch_step, batch in enumerate(val_loader):
eval_step(val_metrics, batch, model, device)
# show the validation data metrics
results = '\t'.join([
'VAL',
f'Epoch: [{epoch}/{EPOCHS}]',
f'Time: {val_metrics["batch_time"].val:.3f} ({val_metrics["batch_time"].avg:.3f})',
f'Loss: {val_metrics["loss"].val:.3f} ({val_metrics["loss"].avg:.3f})',
f'Error: {val_metrics["error"].val:.3f} ({val_metrics["error"].avg:.3f})',
])
print(results)
# get the best model
if val_metrics['error'].avg < best_error:
best_error = val_metrics['error'].avg
best_model = model
# Finalize aum calculator
aum_calculator.finalize(CSV_DIR)
TRAIN Epoch: [0/32] Batch: [99/704] Time: 0.023 (0.025) Loss: 2.709 (3.679) Error: 0.859 (0.880) TRAIN Epoch: [0/32] Batch: [199/704] Time: 0.025 (0.025) Loss: 2.246 (2.969) Error: 0.891 (0.861) TRAIN Epoch: [0/32] Batch: [299/704] Time: 0.014 (0.026) Loss: 2.082 (2.686) Error: 0.844 (0.844) TRAIN Epoch: [0/32] Batch: [399/704] Time: 0.021 (0.026) Loss: 1.936 (2.526) Error: 0.672 (0.832) TRAIN Epoch: [0/32] Batch: [499/704] Time: 0.033 (0.026) Loss: 2.144 (2.418) Error: 0.797 (0.819) TRAIN Epoch: [0/32] Batch: [599/704] Time: 0.012 (0.026) Loss: 1.806 (2.337) Error: 0.625 (0.807) TRAIN Epoch: [0/32] Batch: [699/704] Time: 0.027 (0.026) Loss: 1.935 (2.271) Error: 0.797 (0.795) VAL Epoch: [0/32] Time: 0.004 (0.016) Loss: 2.255 (2.237) Error: 0.750 (0.756) TRAIN Epoch: [1/32] Batch: [99/704] Time: 0.012 (0.027) Loss: 1.736 (1.878) Error: 0.656 (0.722) TRAIN Epoch: [1/32] Batch: [199/704] Time: 0.024 (0.027) Loss: 1.730 (1.851) Error: 0.688 (0.711) TRAIN Epoch: [1/32] Batch: [299/704] Time: 0.018 (0.027) Loss: 1.827 (1.825) Error: 0.625 (0.701) TRAIN Epoch: [1/32] Batch: [399/704] Time: 0.032 (0.027) Loss: 1.775 (1.809) Error: 0.734 (0.694) TRAIN Epoch: [1/32] Batch: [499/704] Time: 0.018 (0.026) Loss: 1.560 (1.788) Error: 0.641 (0.686) TRAIN Epoch: [1/32] Batch: [599/704] Time: 0.025 (0.026) Loss: 1.682 (1.768) Error: 0.594 (0.676) TRAIN Epoch: [1/32] Batch: [699/704] Time: 0.025 (0.026) Loss: 1.657 (1.755) Error: 0.656 (0.669) VAL Epoch: [1/32] Time: 0.006 (0.016) Loss: 1.720 (1.742) Error: 0.750 (0.627) TRAIN Epoch: [2/32] Batch: [99/704] Time: 0.019 (0.026) Loss: 1.648 (1.663) Error: 0.531 (0.626) TRAIN Epoch: [2/32] Batch: [199/704] Time: 0.024 (0.026) Loss: 1.815 (1.638) Error: 0.625 (0.613) TRAIN Epoch: [2/32] Batch: [299/704] Time: 0.030 (0.025) Loss: 1.546 (1.623) Error: 0.500 (0.607) TRAIN Epoch: [2/32] Batch: [399/704] Time: 0.023 (0.025) Loss: 1.581 (1.608) Error: 0.531 (0.599) TRAIN Epoch: [2/32] Batch: [499/704] Time: 0.025 (0.025) Loss: 1.540 (1.597) Error: 0.641 (0.594) TRAIN Epoch: [2/32] Batch: [599/704] Time: 0.027 (0.025) Loss: 1.698 (1.584) Error: 0.609 (0.589) TRAIN Epoch: [2/32] Batch: [699/704] Time: 0.020 (0.025) Loss: 1.443 (1.577) Error: 0.469 (0.586) VAL Epoch: [2/32] Time: 0.004 (0.012) Loss: 15.587 (3.425) Error: 0.875 (0.676) TRAIN Epoch: [3/32] Batch: [99/704] Time: 0.025 (0.029) Loss: 1.621 (1.565) Error: 0.609 (0.587) TRAIN Epoch: [3/32] Batch: [199/704] Time: 0.024 (0.027) Loss: 1.379 (1.538) Error: 0.500 (0.573) TRAIN Epoch: [3/32] Batch: [299/704] Time: 0.025 (0.027) Loss: 1.270 (1.516) Error: 0.469 (0.562) TRAIN Epoch: [3/32] Batch: [399/704] Time: 0.025 (0.027) Loss: 1.496 (1.502) Error: 0.562 (0.557) TRAIN Epoch: [3/32] Batch: [499/704] Time: 0.025 (0.027) Loss: 1.390 (1.493) Error: 0.578 (0.552) TRAIN Epoch: [3/32] Batch: [599/704] Time: 0.012 (0.027) Loss: 1.505 (1.483) Error: 0.500 (0.546) TRAIN Epoch: [3/32] Batch: [699/704] Time: 0.014 (0.027) Loss: 1.538 (1.476) Error: 0.641 (0.542) VAL Epoch: [3/32] Time: 0.004 (0.011) Loss: 2.241 (1.524) Error: 0.625 (0.542) TRAIN Epoch: [4/32] Batch: [99/704] Time: 0.023 (0.026) Loss: 1.695 (1.412) Error: 0.578 (0.523) TRAIN Epoch: [4/32] Batch: [199/704] Time: 0.024 (0.026) Loss: 1.489 (1.410) Error: 0.516 (0.517) TRAIN Epoch: [4/32] Batch: [299/704] Time: 0.018 (0.027) Loss: 1.465 (1.403) Error: 0.453 (0.513) TRAIN Epoch: [4/32] Batch: [399/704] Time: 0.036 (0.027) Loss: 1.432 (1.394) Error: 0.484 (0.510) TRAIN Epoch: [4/32] Batch: [499/704] Time: 0.054 (0.027) Loss: 1.497 (1.384) Error: 0.531 (0.505) TRAIN Epoch: [4/32] Batch: [599/704] Time: 0.040 (0.027) Loss: 1.503 (1.376) Error: 0.562 (0.503) TRAIN Epoch: [4/32] Batch: [699/704] Time: 0.021 (0.027) Loss: 1.297 (1.368) Error: 0.500 (0.499) VAL Epoch: [4/32] Time: 0.069 (0.013) Loss: 1.476 (1.347) Error: 0.625 (0.470) TRAIN Epoch: [5/32] Batch: [99/704] Time: 0.013 (0.026) Loss: 1.364 (1.322) Error: 0.453 (0.472) TRAIN Epoch: [5/32] Batch: [199/704] Time: 0.010 (0.025) Loss: 1.224 (1.319) Error: 0.453 (0.474) TRAIN Epoch: [5/32] Batch: [299/704] Time: 0.009 (0.025) Loss: 1.375 (1.302) Error: 0.500 (0.469) TRAIN Epoch: [5/32] Batch: [399/704] Time: 0.016 (0.025) Loss: 1.154 (1.294) Error: 0.484 (0.466) TRAIN Epoch: [5/32] Batch: [499/704] Time: 0.026 (0.024) Loss: 1.211 (1.283) Error: 0.391 (0.460) TRAIN Epoch: [5/32] Batch: [599/704] Time: 0.024 (0.024) Loss: 1.315 (1.276) Error: 0.500 (0.457) TRAIN Epoch: [5/32] Batch: [699/704] Time: 0.024 (0.025) Loss: 1.442 (1.272) Error: 0.422 (0.456) VAL Epoch: [5/32] Time: 0.004 (0.013) Loss: 1.790 (2.110) Error: 0.625 (0.527) TRAIN Epoch: [6/32] Batch: [99/704] Time: 0.023 (0.028) Loss: 1.162 (1.228) Error: 0.391 (0.443) TRAIN Epoch: [6/32] Batch: [199/704] Time: 0.014 (0.026) Loss: 1.282 (1.222) Error: 0.500 (0.435) TRAIN Epoch: [6/32] Batch: [299/704] Time: 0.025 (0.026) Loss: 1.185 (1.210) Error: 0.453 (0.432) TRAIN Epoch: [6/32] Batch: [399/704] Time: 0.023 (0.026) Loss: 1.118 (1.200) Error: 0.422 (0.428) TRAIN Epoch: [6/32] Batch: [499/704] Time: 0.025 (0.026) Loss: 1.190 (1.195) Error: 0.422 (0.426) TRAIN Epoch: [6/32] Batch: [599/704] Time: 0.010 (0.026) Loss: 1.191 (1.192) Error: 0.438 (0.424) TRAIN Epoch: [6/32] Batch: [699/704] Time: 0.016 (0.026) Loss: 1.066 (1.187) Error: 0.328 (0.422) VAL Epoch: [6/32] Time: 0.004 (0.012) Loss: 1.459 (1.443) Error: 0.500 (0.460) TRAIN Epoch: [7/32] Batch: [99/704] Time: 0.010 (0.022) Loss: 0.976 (1.178) Error: 0.344 (0.419) TRAIN Epoch: [7/32] Batch: [199/704] Time: 0.057 (0.023) Loss: 1.387 (1.157) Error: 0.500 (0.412) TRAIN Epoch: [7/32] Batch: [299/704] Time: 0.047 (0.023) Loss: 1.037 (1.145) Error: 0.328 (0.407) TRAIN Epoch: [7/32] Batch: [399/704] Time: 0.025 (0.024) Loss: 1.103 (1.133) Error: 0.422 (0.402) TRAIN Epoch: [7/32] Batch: [499/704] Time: 0.053 (0.025) Loss: 1.204 (1.128) Error: 0.438 (0.401) TRAIN Epoch: [7/32] Batch: [599/704] Time: 0.023 (0.024) Loss: 1.080 (1.127) Error: 0.406 (0.401) TRAIN Epoch: [7/32] Batch: [699/704] Time: 0.018 (0.025) Loss: 1.165 (1.124) Error: 0.469 (0.398) VAL Epoch: [7/32] Time: 0.004 (0.017) Loss: 1.499 (1.266) Error: 0.500 (0.398) TRAIN Epoch: [8/32] Batch: [99/704] Time: 0.048 (0.025) Loss: 0.937 (1.083) Error: 0.328 (0.385) TRAIN Epoch: [8/32] Batch: [199/704] Time: 0.017 (0.023) Loss: 1.092 (1.077) Error: 0.375 (0.381) TRAIN Epoch: [8/32] Batch: [299/704] Time: 0.023 (0.024) Loss: 1.029 (1.066) Error: 0.438 (0.377) TRAIN Epoch: [8/32] Batch: [399/704] Time: 0.026 (0.024) Loss: 1.101 (1.057) Error: 0.312 (0.373) TRAIN Epoch: [8/32] Batch: [499/704] Time: 0.017 (0.025) Loss: 1.091 (1.057) Error: 0.391 (0.372) TRAIN Epoch: [8/32] Batch: [599/704] Time: 0.025 (0.025) Loss: 0.932 (1.057) Error: 0.391 (0.372) TRAIN Epoch: [8/32] Batch: [699/704] Time: 0.025 (0.025) Loss: 1.051 (1.057) Error: 0.359 (0.372) VAL Epoch: [8/32] Time: 0.077 (0.017) Loss: 1.391 (1.410) Error: 0.500 (0.418) TRAIN Epoch: [9/32] Batch: [99/704] Time: 0.054 (0.027) Loss: 1.109 (1.063) Error: 0.344 (0.373) TRAIN Epoch: [9/32] Batch: [199/704] Time: 0.030 (0.026) Loss: 0.954 (1.043) Error: 0.375 (0.365) TRAIN Epoch: [9/32] Batch: [299/704] Time: 0.013 (0.027) Loss: 0.985 (1.050) Error: 0.359 (0.367) TRAIN Epoch: [9/32] Batch: [399/704] Time: 0.023 (0.027) Loss: 0.940 (1.041) Error: 0.297 (0.365) TRAIN Epoch: [9/32] Batch: [499/704] Time: 0.035 (0.027) Loss: 0.813 (1.032) Error: 0.312 (0.363) TRAIN Epoch: [9/32] Batch: [599/704] Time: 0.023 (0.027) Loss: 0.926 (1.022) Error: 0.312 (0.359) TRAIN Epoch: [9/32] Batch: [699/704] Time: 0.032 (0.027) Loss: 1.022 (1.019) Error: 0.359 (0.358) VAL Epoch: [9/32] Time: 0.005 (0.012) Loss: 1.058 (1.148) Error: 0.375 (0.390) TRAIN Epoch: [10/32] Batch: [99/704] Time: 0.025 (0.030) Loss: 1.236 (0.978) Error: 0.359 (0.341) TRAIN Epoch: [10/32] Batch: [199/704] Time: 0.035 (0.029) Loss: 1.169 (0.967) Error: 0.469 (0.338) TRAIN Epoch: [10/32] Batch: [299/704] Time: 0.017 (0.029) Loss: 0.835 (0.970) Error: 0.344 (0.338) TRAIN Epoch: [10/32] Batch: [399/704] Time: 0.024 (0.029) Loss: 1.126 (0.973) Error: 0.359 (0.338) TRAIN Epoch: [10/32] Batch: [499/704] Time: 0.014 (0.029) Loss: 0.757 (0.972) Error: 0.297 (0.338) TRAIN Epoch: [10/32] Batch: [599/704] Time: 0.013 (0.029) Loss: 1.060 (0.969) Error: 0.406 (0.337) TRAIN Epoch: [10/32] Batch: [699/704] Time: 0.020 (0.029) Loss: 1.027 (0.963) Error: 0.297 (0.335) VAL Epoch: [10/32] Time: 0.004 (0.012) Loss: 0.832 (0.970) Error: 0.375 (0.329) TRAIN Epoch: [11/32] Batch: [99/704] Time: 0.025 (0.029) Loss: 0.937 (0.892) Error: 0.312 (0.310) TRAIN Epoch: [11/32] Batch: [199/704] Time: 0.028 (0.030) Loss: 1.209 (0.914) Error: 0.438 (0.315) TRAIN Epoch: [11/32] Batch: [299/704] Time: 0.028 (0.030) Loss: 1.020 (0.924) Error: 0.375 (0.320) TRAIN Epoch: [11/32] Batch: [399/704] Time: 0.036 (0.029) Loss: 0.857 (0.918) Error: 0.359 (0.319) TRAIN Epoch: [11/32] Batch: [499/704] Time: 0.025 (0.029) Loss: 0.961 (0.925) Error: 0.281 (0.321) TRAIN Epoch: [11/32] Batch: [599/704] Time: 0.048 (0.029) Loss: 1.018 (0.922) Error: 0.438 (0.320) TRAIN Epoch: [11/32] Batch: [699/704] Time: 0.039 (0.029) Loss: 0.926 (0.923) Error: 0.266 (0.320) VAL Epoch: [11/32] Time: 0.004 (0.012) Loss: 1.767 (1.039) Error: 0.250 (0.347) TRAIN Epoch: [12/32] Batch: [99/704] Time: 0.032 (0.026) Loss: 0.878 (0.872) Error: 0.344 (0.309) TRAIN Epoch: [12/32] Batch: [199/704] Time: 0.013 (0.026) Loss: 0.763 (0.894) Error: 0.266 (0.313) TRAIN Epoch: [12/32] Batch: [299/704] Time: 0.015 (0.025) Loss: 0.581 (0.890) Error: 0.172 (0.312) TRAIN Epoch: [12/32] Batch: [399/704] Time: 0.032 (0.025) Loss: 0.940 (0.899) Error: 0.344 (0.315) TRAIN Epoch: [12/32] Batch: [499/704] Time: 0.037 (0.025) Loss: 0.952 (0.897) Error: 0.297 (0.314) TRAIN Epoch: [12/32] Batch: [599/704] Time: 0.025 (0.026) Loss: 1.014 (0.896) Error: 0.281 (0.312) TRAIN Epoch: [12/32] Batch: [699/704] Time: 0.010 (0.026) Loss: 0.863 (0.892) Error: 0.297 (0.311) VAL Epoch: [12/32] Time: 0.073 (0.012) Loss: 0.850 (1.256) Error: 0.125 (0.387) TRAIN Epoch: [13/32] Batch: [99/704] Time: 0.033 (0.026) Loss: 0.957 (0.923) Error: 0.297 (0.321) TRAIN Epoch: [13/32] Batch: [199/704] Time: 0.017 (0.028) Loss: 0.681 (0.896) Error: 0.219 (0.315) TRAIN Epoch: [13/32] Batch: [299/704] Time: 0.021 (0.028) Loss: 0.909 (0.885) Error: 0.375 (0.312) TRAIN Epoch: [13/32] Batch: [399/704] Time: 0.039 (0.028) Loss: 1.032 (0.878) Error: 0.391 (0.309) TRAIN Epoch: [13/32] Batch: [499/704] Time: 0.020 (0.028) Loss: 0.837 (0.872) Error: 0.312 (0.307) TRAIN Epoch: [13/32] Batch: [599/704] Time: 0.011 (0.028) Loss: 1.117 (0.868) Error: 0.375 (0.307) TRAIN Epoch: [13/32] Batch: [699/704] Time: 0.042 (0.028) Loss: 0.841 (0.868) Error: 0.266 (0.306) VAL Epoch: [13/32] Time: 0.004 (0.012) Loss: 0.656 (1.026) Error: 0.375 (0.308) TRAIN Epoch: [14/32] Batch: [99/704] Time: 0.023 (0.027) Loss: 0.778 (0.827) Error: 0.250 (0.284) TRAIN Epoch: [14/32] Batch: [199/704] Time: 0.025 (0.027) Loss: 0.688 (0.825) Error: 0.266 (0.285) TRAIN Epoch: [14/32] Batch: [299/704] Time: 0.030 (0.027) Loss: 0.947 (0.833) Error: 0.375 (0.288) TRAIN Epoch: [14/32] Batch: [399/704] Time: 0.042 (0.028) Loss: 1.097 (0.825) Error: 0.375 (0.285) TRAIN Epoch: [14/32] Batch: [499/704] Time: 0.025 (0.028) Loss: 0.954 (0.824) Error: 0.297 (0.285) TRAIN Epoch: [14/32] Batch: [599/704] Time: 0.025 (0.028) Loss: 0.693 (0.820) Error: 0.234 (0.285) TRAIN Epoch: [14/32] Batch: [699/704] Time: 0.034 (0.028) Loss: 0.657 (0.821) Error: 0.250 (0.286) VAL Epoch: [14/32] Time: 0.004 (0.015) Loss: 0.688 (1.068) Error: 0.375 (0.312) TRAIN Epoch: [15/32] Batch: [99/704] Time: 0.025 (0.029) Loss: 0.640 (0.800) Error: 0.203 (0.274) TRAIN Epoch: [15/32] Batch: [199/704] Time: 0.025 (0.028) Loss: 0.701 (0.800) Error: 0.266 (0.273) TRAIN Epoch: [15/32] Batch: [299/704] Time: 0.025 (0.029) Loss: 0.579 (0.804) Error: 0.172 (0.276) TRAIN Epoch: [15/32] Batch: [399/704] Time: 0.026 (0.028) Loss: 0.863 (0.802) Error: 0.266 (0.276) TRAIN Epoch: [15/32] Batch: [499/704] Time: 0.023 (0.029) Loss: 0.911 (0.804) Error: 0.297 (0.277) TRAIN Epoch: [15/32] Batch: [599/704] Time: 0.026 (0.028) Loss: 0.697 (0.808) Error: 0.266 (0.279) TRAIN Epoch: [15/32] Batch: [699/704] Time: 0.025 (0.028) Loss: 0.834 (0.804) Error: 0.312 (0.277) VAL Epoch: [15/32] Time: 0.004 (0.016) Loss: 0.795 (0.961) Error: 0.500 (0.294) TRAIN Epoch: [16/32] Batch: [99/704] Time: 0.025 (0.028) Loss: 0.803 (0.746) Error: 0.234 (0.260) TRAIN Epoch: [16/32] Batch: [199/704] Time: 0.021 (0.028) Loss: 0.817 (0.729) Error: 0.250 (0.252) TRAIN Epoch: [16/32] Batch: [299/704] Time: 0.025 (0.028) Loss: 0.472 (0.718) Error: 0.188 (0.248) TRAIN Epoch: [16/32] Batch: [399/704] Time: 0.026 (0.028) Loss: 0.710 (0.706) Error: 0.188 (0.244) TRAIN Epoch: [16/32] Batch: [499/704] Time: 0.032 (0.028) Loss: 0.711 (0.698) Error: 0.297 (0.242) TRAIN Epoch: [16/32] Batch: [599/704] Time: 0.024 (0.028) Loss: 0.610 (0.691) Error: 0.203 (0.240) TRAIN Epoch: [16/32] Batch: [699/704] Time: 0.025 (0.028) Loss: 0.650 (0.687) Error: 0.250 (0.239) VAL Epoch: [16/32] Time: 0.084 (0.015) Loss: 0.764 (0.725) Error: 0.375 (0.251) TRAIN Epoch: [17/32] Batch: [99/704] Time: 0.033 (0.028) Loss: 0.446 (0.650) Error: 0.156 (0.226) TRAIN Epoch: [17/32] Batch: [199/704] Time: 0.030 (0.028) Loss: 0.841 (0.657) Error: 0.281 (0.230) TRAIN Epoch: [17/32] Batch: [299/704] Time: 0.025 (0.029) Loss: 0.808 (0.650) Error: 0.297 (0.228) TRAIN Epoch: [17/32] Batch: [399/704] Time: 0.019 (0.030) Loss: 0.608 (0.651) Error: 0.219 (0.227) TRAIN Epoch: [17/32] Batch: [499/704] Time: 0.025 (0.030) Loss: 0.620 (0.651) Error: 0.250 (0.226) TRAIN Epoch: [17/32] Batch: [599/704] Time: 0.025 (0.030) Loss: 0.712 (0.651) Error: 0.266 (0.226) TRAIN Epoch: [17/32] Batch: [699/704] Time: 0.025 (0.030) Loss: 0.626 (0.652) Error: 0.219 (0.227) VAL Epoch: [17/32] Time: 0.004 (0.014) Loss: 0.762 (0.763) Error: 0.375 (0.246) TRAIN Epoch: [18/32] Batch: [99/704] Time: 0.049 (0.033) Loss: 0.452 (0.650) Error: 0.156 (0.221) TRAIN Epoch: [18/32] Batch: [199/704] Time: 0.030 (0.032) Loss: 0.616 (0.640) Error: 0.250 (0.223) TRAIN Epoch: [18/32] Batch: [299/704] Time: 0.029 (0.031) Loss: 0.586 (0.637) Error: 0.203 (0.222) TRAIN Epoch: [18/32] Batch: [399/704] Time: 0.025 (0.030) Loss: 0.595 (0.632) Error: 0.188 (0.220) TRAIN Epoch: [18/32] Batch: [499/704] Time: 0.027 (0.030) Loss: 0.451 (0.636) Error: 0.172 (0.221) TRAIN Epoch: [18/32] Batch: [599/704] Time: 0.025 (0.030) Loss: 0.605 (0.636) Error: 0.219 (0.221) TRAIN Epoch: [18/32] Batch: [699/704] Time: 0.036 (0.030) Loss: 0.418 (0.635) Error: 0.203 (0.221) VAL Epoch: [18/32] Time: 0.004 (0.019) Loss: 0.658 (0.796) Error: 0.375 (0.244) TRAIN Epoch: [19/32] Batch: [99/704] Time: 0.018 (0.029) Loss: 0.744 (0.629) Error: 0.219 (0.224) TRAIN Epoch: [19/32] Batch: [199/704] Time: 0.073 (0.033) Loss: 0.814 (0.626) Error: 0.281 (0.216) TRAIN Epoch: [19/32] Batch: [299/704] Time: 0.065 (0.030) Loss: 0.910 (0.629) Error: 0.328 (0.219) TRAIN Epoch: [19/32] Batch: [399/704] Time: 0.046 (0.029) Loss: 0.857 (0.627) Error: 0.234 (0.218) TRAIN Epoch: [19/32] Batch: [499/704] Time: 0.018 (0.028) Loss: 0.434 (0.626) Error: 0.172 (0.217) TRAIN Epoch: [19/32] Batch: [599/704] Time: 0.018 (0.028) Loss: 0.615 (0.627) Error: 0.234 (0.218) TRAIN Epoch: [19/32] Batch: [699/704] Time: 0.072 (0.028) Loss: 0.617 (0.627) Error: 0.203 (0.218) VAL Epoch: [19/32] Time: 0.004 (0.016) Loss: 0.644 (0.748) Error: 0.375 (0.242) TRAIN Epoch: [20/32] Batch: [99/704] Time: 0.010 (0.026) Loss: 0.905 (0.632) Error: 0.312 (0.221) TRAIN Epoch: [20/32] Batch: [199/704] Time: 0.059 (0.025) Loss: 0.872 (0.625) Error: 0.312 (0.220) TRAIN Epoch: [20/32] Batch: [299/704] Time: 0.056 (0.025) Loss: 0.691 (0.613) Error: 0.266 (0.216) TRAIN Epoch: [20/32] Batch: [399/704] Time: 0.024 (0.025) Loss: 0.694 (0.614) Error: 0.297 (0.215) TRAIN Epoch: [20/32] Batch: [499/704] Time: 0.041 (0.026) Loss: 0.756 (0.612) Error: 0.266 (0.215) TRAIN Epoch: [20/32] Batch: [599/704] Time: 0.010 (0.025) Loss: 0.688 (0.612) Error: 0.203 (0.215) TRAIN Epoch: [20/32] Batch: [699/704] Time: 0.072 (0.025) Loss: 0.582 (0.613) Error: 0.172 (0.214) VAL Epoch: [20/32] Time: 0.004 (0.016) Loss: 0.589 (0.827) Error: 0.250 (0.235) TRAIN Epoch: [21/32] Batch: [99/704] Time: 0.040 (0.023) Loss: 0.754 (0.593) Error: 0.250 (0.207) TRAIN Epoch: [21/32] Batch: [199/704] Time: 0.025 (0.024) Loss: 0.579 (0.606) Error: 0.156 (0.212) TRAIN Epoch: [21/32] Batch: [299/704] Time: 0.032 (0.024) Loss: 0.494 (0.608) Error: 0.172 (0.212) TRAIN Epoch: [21/32] Batch: [399/704] Time: 0.034 (0.024) Loss: 0.690 (0.611) Error: 0.234 (0.213) TRAIN Epoch: [21/32] Batch: [499/704] Time: 0.032 (0.025) Loss: 0.473 (0.609) Error: 0.156 (0.213) TRAIN Epoch: [21/32] Batch: [599/704] Time: 0.025 (0.025) Loss: 0.618 (0.607) Error: 0.234 (0.212) TRAIN Epoch: [21/32] Batch: [699/704] Time: 0.025 (0.025) Loss: 0.681 (0.606) Error: 0.250 (0.212) VAL Epoch: [21/32] Time: 0.004 (0.014) Loss: 0.572 (0.755) Error: 0.375 (0.238) TRAIN Epoch: [22/32] Batch: [99/704] Time: 0.026 (0.027) Loss: 0.617 (0.611) Error: 0.203 (0.210) TRAIN Epoch: [22/32] Batch: [199/704] Time: 0.025 (0.027) Loss: 0.772 (0.600) Error: 0.281 (0.209) TRAIN Epoch: [22/32] Batch: [299/704] Time: 0.028 (0.027) Loss: 0.782 (0.595) Error: 0.281 (0.208) TRAIN Epoch: [22/32] Batch: [399/704] Time: 0.025 (0.027) Loss: 0.583 (0.601) Error: 0.172 (0.211) TRAIN Epoch: [22/32] Batch: [499/704] Time: 0.037 (0.027) Loss: 0.467 (0.602) Error: 0.219 (0.211) TRAIN Epoch: [22/32] Batch: [599/704] Time: 0.032 (0.027) Loss: 0.549 (0.600) Error: 0.188 (0.211) TRAIN Epoch: [22/32] Batch: [699/704] Time: 0.025 (0.028) Loss: 0.834 (0.600) Error: 0.297 (0.210) VAL Epoch: [22/32] Time: 0.004 (0.015) Loss: 0.560 (0.758) Error: 0.375 (0.237) TRAIN Epoch: [23/32] Batch: [99/704] Time: 0.023 (0.024) Loss: 0.642 (0.590) Error: 0.297 (0.203) TRAIN Epoch: [23/32] Batch: [199/704] Time: 0.027 (0.025) Loss: 0.438 (0.591) Error: 0.156 (0.203) TRAIN Epoch: [23/32] Batch: [299/704] Time: 0.014 (0.024) Loss: 0.560 (0.592) Error: 0.172 (0.203) TRAIN Epoch: [23/32] Batch: [399/704] Time: 0.023 (0.024) Loss: 0.656 (0.591) Error: 0.250 (0.204) TRAIN Epoch: [23/32] Batch: [499/704] Time: 0.015 (0.023) Loss: 0.559 (0.593) Error: 0.250 (0.205) TRAIN Epoch: [23/32] Batch: [599/704] Time: 0.026 (0.023) Loss: 0.577 (0.592) Error: 0.172 (0.206) TRAIN Epoch: [23/32] Batch: [699/704] Time: 0.021 (0.024) Loss: 0.428 (0.594) Error: 0.188 (0.206) VAL Epoch: [23/32] Time: 0.004 (0.010) Loss: 0.425 (0.802) Error: 0.125 (0.235) TRAIN Epoch: [24/32] Batch: [99/704] Time: 0.025 (0.030) Loss: 0.543 (0.601) Error: 0.203 (0.209) TRAIN Epoch: [24/32] Batch: [199/704] Time: 0.016 (0.028) Loss: 0.669 (0.586) Error: 0.234 (0.208) TRAIN Epoch: [24/32] Batch: [299/704] Time: 0.025 (0.027) Loss: 0.590 (0.590) Error: 0.219 (0.208) TRAIN Epoch: [24/32] Batch: [399/704] Time: 0.058 (0.027) Loss: 0.550 (0.586) Error: 0.234 (0.206) TRAIN Epoch: [24/32] Batch: [499/704] Time: 0.018 (0.027) Loss: 0.609 (0.581) Error: 0.172 (0.205) TRAIN Epoch: [24/32] Batch: [599/704] Time: 0.018 (0.028) Loss: 0.717 (0.580) Error: 0.234 (0.205) TRAIN Epoch: [24/32] Batch: [699/704] Time: 0.033 (0.028) Loss: 0.359 (0.580) Error: 0.094 (0.205) VAL Epoch: [24/32] Time: 0.004 (0.015) Loss: 0.477 (0.764) Error: 0.375 (0.230) TRAIN Epoch: [25/32] Batch: [99/704] Time: 0.024 (0.027) Loss: 0.573 (0.568) Error: 0.219 (0.200) TRAIN Epoch: [25/32] Batch: [199/704] Time: 0.100 (0.027) Loss: 0.383 (0.567) Error: 0.156 (0.201) TRAIN Epoch: [25/32] Batch: [299/704] Time: 0.025 (0.027) Loss: 0.537 (0.572) Error: 0.203 (0.200) TRAIN Epoch: [25/32] Batch: [399/704] Time: 0.016 (0.027) Loss: 0.656 (0.574) Error: 0.234 (0.201) TRAIN Epoch: [25/32] Batch: [499/704] Time: 0.026 (0.027) Loss: 0.584 (0.573) Error: 0.203 (0.200) TRAIN Epoch: [25/32] Batch: [599/704] Time: 0.036 (0.027) Loss: 0.603 (0.573) Error: 0.219 (0.201) TRAIN Epoch: [25/32] Batch: [699/704] Time: 0.024 (0.028) Loss: 0.468 (0.575) Error: 0.172 (0.200) VAL Epoch: [25/32] Time: 0.004 (0.014) Loss: 0.467 (0.793) Error: 0.375 (0.234) TRAIN Epoch: [26/32] Batch: [99/704] Time: 0.042 (0.026) Loss: 0.620 (0.561) Error: 0.219 (0.194) TRAIN Epoch: [26/32] Batch: [199/704] Time: 0.019 (0.028) Loss: 0.537 (0.570) Error: 0.172 (0.199) TRAIN Epoch: [26/32] Batch: [299/704] Time: 0.025 (0.028) Loss: 0.859 (0.571) Error: 0.281 (0.198) TRAIN Epoch: [26/32] Batch: [399/704] Time: 0.025 (0.028) Loss: 0.504 (0.575) Error: 0.156 (0.200) TRAIN Epoch: [26/32] Batch: [499/704] Time: 0.022 (0.028) Loss: 0.652 (0.578) Error: 0.234 (0.200) TRAIN Epoch: [26/32] Batch: [599/704] Time: 0.023 (0.028) Loss: 0.677 (0.575) Error: 0.250 (0.200) TRAIN Epoch: [26/32] Batch: [699/704] Time: 0.025 (0.028) Loss: 0.584 (0.575) Error: 0.156 (0.200) VAL Epoch: [26/32] Time: 0.076 (0.018) Loss: 0.499 (0.753) Error: 0.375 (0.232) TRAIN Epoch: [27/32] Batch: [99/704] Time: 0.028 (0.028) Loss: 0.535 (0.585) Error: 0.188 (0.206) TRAIN Epoch: [27/32] Batch: [199/704] Time: 0.036 (0.029) Loss: 0.481 (0.580) Error: 0.172 (0.202) TRAIN Epoch: [27/32] Batch: [299/704] Time: 0.025 (0.029) Loss: 0.592 (0.575) Error: 0.188 (0.202) TRAIN Epoch: [27/32] Batch: [399/704] Time: 0.056 (0.029) Loss: 0.599 (0.574) Error: 0.234 (0.202) TRAIN Epoch: [27/32] Batch: [499/704] Time: 0.021 (0.029) Loss: 0.558 (0.570) Error: 0.156 (0.200) TRAIN Epoch: [27/32] Batch: [599/704] Time: 0.054 (0.029) Loss: 0.527 (0.570) Error: 0.219 (0.200) TRAIN Epoch: [27/32] Batch: [699/704] Time: 0.023 (0.029) Loss: 0.657 (0.571) Error: 0.281 (0.200) VAL Epoch: [27/32] Time: 0.004 (0.008) Loss: 0.519 (0.766) Error: 0.375 (0.230) TRAIN Epoch: [28/32] Batch: [99/704] Time: 0.021 (0.033) Loss: 0.465 (0.570) Error: 0.156 (0.194) TRAIN Epoch: [28/32] Batch: [199/704] Time: 0.025 (0.031) Loss: 0.480 (0.570) Error: 0.172 (0.196) TRAIN Epoch: [28/32] Batch: [299/704] Time: 0.025 (0.031) Loss: 0.463 (0.571) Error: 0.125 (0.198) TRAIN Epoch: [28/32] Batch: [399/704] Time: 0.025 (0.031) Loss: 0.409 (0.568) Error: 0.109 (0.198) TRAIN Epoch: [28/32] Batch: [499/704] Time: 0.010 (0.030) Loss: 0.452 (0.567) Error: 0.156 (0.197) TRAIN Epoch: [28/32] Batch: [599/704] Time: 0.025 (0.030) Loss: 0.606 (0.569) Error: 0.188 (0.198) TRAIN Epoch: [28/32] Batch: [699/704] Time: 0.025 (0.030) Loss: 0.411 (0.567) Error: 0.125 (0.197) VAL Epoch: [28/32] Time: 0.004 (0.016) Loss: 0.461 (0.794) Error: 0.375 (0.233) TRAIN Epoch: [29/32] Batch: [99/704] Time: 0.009 (0.022) Loss: 0.428 (0.548) Error: 0.156 (0.194) TRAIN Epoch: [29/32] Batch: [199/704] Time: 0.037 (0.027) Loss: 0.562 (0.563) Error: 0.219 (0.197) TRAIN Epoch: [29/32] Batch: [299/704] Time: 0.024 (0.027) Loss: 0.383 (0.565) Error: 0.125 (0.198) TRAIN Epoch: [29/32] Batch: [399/704] Time: 0.025 (0.028) Loss: 0.612 (0.567) Error: 0.172 (0.198) TRAIN Epoch: [29/32] Batch: [499/704] Time: 0.025 (0.028) Loss: 0.435 (0.568) Error: 0.141 (0.199) TRAIN Epoch: [29/32] Batch: [599/704] Time: 0.025 (0.028) Loss: 0.715 (0.569) Error: 0.281 (0.199) TRAIN Epoch: [29/32] Batch: [699/704] Time: 0.023 (0.028) Loss: 0.485 (0.569) Error: 0.188 (0.199) VAL Epoch: [29/32] Time: 0.004 (0.013) Loss: 0.466 (0.821) Error: 0.375 (0.232) TRAIN Epoch: [30/32] Batch: [99/704] Time: 0.036 (0.029) Loss: 0.361 (0.566) Error: 0.125 (0.198) TRAIN Epoch: [30/32] Batch: [199/704] Time: 0.025 (0.029) Loss: 0.579 (0.572) Error: 0.219 (0.199) TRAIN Epoch: [30/32] Batch: [299/704] Time: 0.027 (0.029) Loss: 0.423 (0.580) Error: 0.156 (0.201) TRAIN Epoch: [30/32] Batch: [399/704] Time: 0.018 (0.030) Loss: 0.503 (0.575) Error: 0.188 (0.199) TRAIN Epoch: [30/32] Batch: [499/704] Time: 0.026 (0.030) Loss: 0.537 (0.571) Error: 0.188 (0.199) TRAIN Epoch: [30/32] Batch: [599/704] Time: 0.033 (0.030) Loss: 0.615 (0.566) Error: 0.188 (0.197) TRAIN Epoch: [30/32] Batch: [699/704] Time: 0.030 (0.030) Loss: 0.526 (0.564) Error: 0.188 (0.197) VAL Epoch: [30/32] Time: 0.004 (0.020) Loss: 0.465 (0.810) Error: 0.375 (0.232) TRAIN Epoch: [31/32] Batch: [99/704] Time: 0.026 (0.032) Loss: 0.685 (0.565) Error: 0.266 (0.199) TRAIN Epoch: [31/32] Batch: [199/704] Time: 0.025 (0.030) Loss: 0.707 (0.574) Error: 0.312 (0.201) TRAIN Epoch: [31/32] Batch: [299/704] Time: 0.029 (0.029) Loss: 0.628 (0.572) Error: 0.219 (0.200) TRAIN Epoch: [31/32] Batch: [399/704] Time: 0.026 (0.029) Loss: 0.802 (0.572) Error: 0.312 (0.200) TRAIN Epoch: [31/32] Batch: [499/704] Time: 0.026 (0.029) Loss: 0.531 (0.571) Error: 0.188 (0.200) TRAIN Epoch: [31/32] Batch: [599/704] Time: 0.026 (0.029) Loss: 0.416 (0.569) Error: 0.188 (0.200) TRAIN Epoch: [31/32] Batch: [699/704] Time: 0.025 (0.029) Loss: 0.436 (0.567) Error: 0.172 (0.199) VAL Epoch: [31/32] Time: 0.004 (0.013) Loss: 0.488 (0.833) Error: 0.375 (0.231)
torch.save(best_model, "ResNetClassifier")
We will also evaluate the model.
# Eval best model on on test set
test_metrics = {'loss': AverageMeter(), 'error': AverageMeter(), 'batch_time': AverageMeter()}
num_batches = len(test_loader)
for batch_step, batch in enumerate(test_loader):
eval_step(test_metrics, batch, best_model, device)
# log test metrics to console
results = '\t'.join([
'FINAL TEST RESULTS',
f'Loss: {test_metrics["loss"].avg:.3f}',
f'Error: {test_metrics["error"].avg:.3f}',
])
print(results)
FINAL TEST RESULTS Loss: 0.779 Error: 0.220
To identify the mislabelled images we will be using the same baseline of -0.8 as mentioned in the paper.
import pandas as pd
# df_anom = pd.read_csv('./csv/cifar10_high_aum.1d188306.csv')
df_aum = pd.read_csv('./csv/aum_values.csv')
df_normal = df_aum[df_aum['aum'] >= -0.8]
df_anom = df_aum[df_aum['aum'] < -0.8]
# get the list of indexes that are normal and anomalous
normal_idx = df_normal['sample_id'].to_list()
anomalous_idx = df_anom['sample_id'].to_list()
print(f'Number of anomalous images: {len(anomalous_idx)}')
# get the subset of the images that are labelled correct
normal_imgs = torch.utils.data.Subset(train_dataset_torch, normal_idx)
Number of anomalous images: 5270
Once we identified the mislabelled images, we will split them from the main dataset, train another model on the remaining data and classify the removed images using that model.
reclassifier_model = resnet34(num_classes=10)
reclassifier_model = reclassifier_model.to(device)
# Create optimizer & lr scheduler
parameters = [p for p in reclassifier_model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(parameters,
lr=LEARNING_RATE,
momentum=MOMENTUM,
nesterov=True)
milestones = [0.5 * EPOCHS, 0.75 * EPOCHS]
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=milestones, gamma=0.1)
# Keep track of things
best_error = math.inf
# keep the best model
best_reclassifier_model = None
for epoch in range(EPOCHS):
train_loader = DataLoader(normal_imgs,
batch_size=TRAIN_BATCH_SIZE,
shuffle=True,
pin_memory=(torch.cuda.is_available()),
num_workers=0)
# set up the metrics
train_metrics = {
'loss': AverageMeter(),
'error': AverageMeter(),
'batch_time': AverageMeter()
}
val_metrics = {
'loss': AverageMeter(),
'error': AverageMeter(),
'batch_time': AverageMeter()
}
num_batches = len(train_loader)
for batch_step, batch in enumerate(train_loader):
train_step(train_metrics, aum_calculator,
batch_step, num_batches, batch, epoch, EPOCHS, reclassifier_model,
optimizer, device, False)
scheduler.step()
num_batches = len(val_loader)
for batch_step, batch in enumerate(val_loader):
eval_step(val_metrics, batch, reclassifier_model, device)
# show the validation data metrics
results = '\t'.join([
'VAL',
f'Epoch: [{epoch}/{EPOCHS}]',
f'Time: {val_metrics["batch_time"].val:.3f} ({val_metrics["batch_time"].avg:.3f})',
f'Loss: {val_metrics["loss"].val:.3f} ({val_metrics["loss"].avg:.3f})',
f'Error: {val_metrics["error"].val:.3f} ({val_metrics["error"].avg:.3f})',
])
print(results)
# get the best model
if val_metrics['error'].avg < best_error:
best_error = val_metrics['error'].avg
best_reclassifier_model = reclassifier_model
TRAIN Epoch: [0/32] Batch: [99/621] Time: 0.064 (0.025) Loss: 2.394 (4.760) Error: 0.844 (0.882) TRAIN Epoch: [0/32] Batch: [199/621] Time: 0.014 (0.026) Loss: 2.239 (3.567) Error: 0.875 (0.871) TRAIN Epoch: [0/32] Batch: [299/621] Time: 0.055 (0.027) Loss: 2.227 (3.128) Error: 0.891 (0.862) TRAIN Epoch: [0/32] Batch: [399/621] Time: 0.025 (0.028) Loss: 2.308 (2.891) Error: 0.828 (0.854) TRAIN Epoch: [0/32] Batch: [499/621] Time: 0.045 (0.027) Loss: 2.028 (2.743) Error: 0.828 (0.849) TRAIN Epoch: [0/32] Batch: [599/621] Time: 0.023 (0.027) Loss: 2.038 (2.635) Error: 0.766 (0.842) VAL Epoch: [0/32] Time: 0.004 (0.011) Loss: 1.762 (2.132) Error: 0.875 (0.811) TRAIN Epoch: [1/32] Batch: [99/621] Time: 0.025 (0.033) Loss: 2.028 (2.047) Error: 0.766 (0.790) TRAIN Epoch: [1/32] Batch: [199/621] Time: 0.049 (0.032) Loss: 2.006 (2.033) Error: 0.828 (0.788) TRAIN Epoch: [1/32] Batch: [299/621] Time: 0.024 (0.032) Loss: 2.008 (2.017) Error: 0.828 (0.787) TRAIN Epoch: [1/32] Batch: [399/621] Time: 0.024 (0.031) Loss: 1.886 (2.001) Error: 0.750 (0.782) TRAIN Epoch: [1/32] Batch: [499/621] Time: 0.020 (0.031) Loss: 1.980 (1.980) Error: 0.781 (0.772) TRAIN Epoch: [1/32] Batch: [599/621] Time: 0.026 (0.031) Loss: 1.682 (1.964) Error: 0.641 (0.765) VAL Epoch: [1/32] Time: 0.077 (0.018) Loss: 1.910 (1.924) Error: 0.750 (0.731) TRAIN Epoch: [2/32] Batch: [99/621] Time: 0.023 (0.029) Loss: 1.806 (1.852) Error: 0.734 (0.716) TRAIN Epoch: [2/32] Batch: [199/621] Time: 0.044 (0.028) Loss: 1.772 (1.828) Error: 0.719 (0.707) TRAIN Epoch: [2/32] Batch: [299/621] Time: 0.023 (0.029) Loss: 1.658 (1.816) Error: 0.688 (0.701) TRAIN Epoch: [2/32] Batch: [399/621] Time: 0.049 (0.029) Loss: 2.059 (1.807) Error: 0.797 (0.696) TRAIN Epoch: [2/32] Batch: [499/621] Time: 0.023 (0.030) Loss: 1.683 (1.793) Error: 0.703 (0.689) TRAIN Epoch: [2/32] Batch: [599/621] Time: 0.025 (0.030) Loss: 1.511 (1.781) Error: 0.562 (0.684) VAL Epoch: [2/32] Time: 0.004 (0.013) Loss: 2.054 (1.733) Error: 0.875 (0.649) TRAIN Epoch: [3/32] Batch: [99/621] Time: 0.024 (0.030) Loss: 1.719 (1.705) Error: 0.703 (0.646) TRAIN Epoch: [3/32] Batch: [199/621] Time: 0.044 (0.031) Loss: 1.563 (1.692) Error: 0.609 (0.643) TRAIN Epoch: [3/32] Batch: [299/621] Time: 0.029 (0.031) Loss: 1.927 (1.680) Error: 0.656 (0.638) TRAIN Epoch: [3/32] Batch: [399/621] Time: 0.025 (0.030) Loss: 1.784 (1.678) Error: 0.656 (0.635) TRAIN Epoch: [3/32] Batch: [499/621] Time: 0.022 (0.030) Loss: 1.584 (1.669) Error: 0.703 (0.632) TRAIN Epoch: [3/32] Batch: [599/621] Time: 0.051 (0.029) Loss: 1.550 (1.655) Error: 0.609 (0.627) VAL Epoch: [3/32] Time: 0.004 (0.010) Loss: 1.806 (1.562) Error: 0.625 (0.594) TRAIN Epoch: [4/32] Batch: [99/621] Time: 0.046 (0.036) Loss: 1.714 (1.603) Error: 0.641 (0.599) TRAIN Epoch: [4/32] Batch: [199/621] Time: 0.020 (0.033) Loss: 1.628 (1.594) Error: 0.625 (0.592) TRAIN Epoch: [4/32] Batch: [299/621] Time: 0.012 (0.031) Loss: 1.577 (1.591) Error: 0.672 (0.595) TRAIN Epoch: [4/32] Batch: [399/621] Time: 0.017 (0.031) Loss: 1.707 (1.576) Error: 0.594 (0.590) TRAIN Epoch: [4/32] Batch: [499/621] Time: 0.025 (0.031) Loss: 1.278 (1.569) Error: 0.438 (0.586) TRAIN Epoch: [4/32] Batch: [599/621] Time: 0.049 (0.030) Loss: 1.515 (1.562) Error: 0.594 (0.582) VAL Epoch: [4/32] Time: 0.004 (0.015) Loss: 1.680 (1.621) Error: 0.625 (0.571) TRAIN Epoch: [5/32] Batch: [99/621] Time: 0.025 (0.033) Loss: 1.329 (1.483) Error: 0.453 (0.547) TRAIN Epoch: [5/32] Batch: [199/621] Time: 0.024 (0.032) Loss: 1.704 (1.480) Error: 0.625 (0.541) TRAIN Epoch: [5/32] Batch: [299/621] Time: 0.031 (0.031) Loss: 1.187 (1.474) Error: 0.375 (0.541) TRAIN Epoch: [5/32] Batch: [399/621] Time: 0.012 (0.031) Loss: 1.264 (1.472) Error: 0.469 (0.541) TRAIN Epoch: [5/32] Batch: [499/621] Time: 0.028 (0.031) Loss: 1.295 (1.467) Error: 0.469 (0.538) TRAIN Epoch: [5/32] Batch: [599/621] Time: 0.025 (0.031) Loss: 1.175 (1.465) Error: 0.406 (0.537) VAL Epoch: [5/32] Time: 0.004 (0.012) Loss: 1.322 (1.341) Error: 0.500 (0.494) TRAIN Epoch: [6/32] Batch: [99/621] Time: 0.028 (0.034) Loss: 1.621 (1.397) Error: 0.516 (0.509) TRAIN Epoch: [6/32] Batch: [199/621] Time: 0.017 (0.032) Loss: 1.621 (1.413) Error: 0.562 (0.513) TRAIN Epoch: [6/32] Batch: [299/621] Time: 0.024 (0.032) Loss: 1.486 (1.411) Error: 0.625 (0.514) TRAIN Epoch: [6/32] Batch: [399/621] Time: 0.028 (0.032) Loss: 1.138 (1.405) Error: 0.406 (0.513) TRAIN Epoch: [6/32] Batch: [499/621] Time: 0.021 (0.032) Loss: 1.404 (1.401) Error: 0.438 (0.512) TRAIN Epoch: [6/32] Batch: [599/621] Time: 0.024 (0.032) Loss: 1.286 (1.393) Error: 0.500 (0.508) VAL Epoch: [6/32] Time: 0.004 (0.013) Loss: 1.678 (1.538) Error: 0.750 (0.537) TRAIN Epoch: [7/32] Batch: [99/621] Time: 0.010 (0.033) Loss: 1.077 (1.346) Error: 0.375 (0.491) TRAIN Epoch: [7/32] Batch: [199/621] Time: 0.010 (0.030) Loss: 1.403 (1.329) Error: 0.438 (0.482) TRAIN Epoch: [7/32] Batch: [299/621] Time: 0.013 (0.029) Loss: 1.469 (1.325) Error: 0.500 (0.478) TRAIN Epoch: [7/32] Batch: [399/621] Time: 0.016 (0.031) Loss: 1.338 (1.328) Error: 0.484 (0.480) TRAIN Epoch: [7/32] Batch: [499/621] Time: 0.071 (0.031) Loss: 1.594 (1.325) Error: 0.578 (0.478) TRAIN Epoch: [7/32] Batch: [599/621] Time: 0.010 (0.030) Loss: 1.157 (1.323) Error: 0.375 (0.476) VAL Epoch: [7/32] Time: 0.004 (0.008) Loss: 1.295 (1.202) Error: 0.500 (0.441) TRAIN Epoch: [8/32] Batch: [99/621] Time: 0.051 (0.027) Loss: 1.468 (1.275) Error: 0.484 (0.459) TRAIN Epoch: [8/32] Batch: [199/621] Time: 0.076 (0.029) Loss: 1.329 (1.283) Error: 0.469 (0.463) TRAIN Epoch: [8/32] Batch: [299/621] Time: 0.063 (0.030) Loss: 1.192 (1.283) Error: 0.469 (0.466) TRAIN Epoch: [8/32] Batch: [399/621] Time: 0.023 (0.032) Loss: 1.262 (1.273) Error: 0.531 (0.460) TRAIN Epoch: [8/32] Batch: [499/621] Time: 0.043 (0.031) Loss: 1.408 (1.265) Error: 0.500 (0.455) TRAIN Epoch: [8/32] Batch: [599/621] Time: 0.024 (0.032) Loss: 1.114 (1.265) Error: 0.406 (0.454) VAL Epoch: [8/32] Time: 0.004 (0.018) Loss: 1.220 (1.200) Error: 0.500 (0.429) TRAIN Epoch: [9/32] Batch: [99/621] Time: 0.021 (0.033) Loss: 1.108 (1.232) Error: 0.453 (0.444) TRAIN Epoch: [9/32] Batch: [199/621] Time: 0.010 (0.031) Loss: 1.015 (1.223) Error: 0.312 (0.439) TRAIN Epoch: [9/32] Batch: [299/621] Time: 0.026 (0.031) Loss: 1.163 (1.220) Error: 0.391 (0.439) TRAIN Epoch: [9/32] Batch: [399/621] Time: 0.025 (0.031) Loss: 1.213 (1.215) Error: 0.375 (0.436) TRAIN Epoch: [9/32] Batch: [499/621] Time: 0.023 (0.031) Loss: 1.005 (1.207) Error: 0.344 (0.433) TRAIN Epoch: [9/32] Batch: [599/621] Time: 0.016 (0.031) Loss: 1.485 (1.205) Error: 0.484 (0.432) VAL Epoch: [9/32] Time: 0.004 (0.015) Loss: 1.065 (1.184) Error: 0.375 (0.426) TRAIN Epoch: [10/32] Batch: [99/621] Time: 0.024 (0.033) Loss: 1.127 (1.166) Error: 0.359 (0.412) TRAIN Epoch: [10/32] Batch: [199/621] Time: 0.025 (0.033) Loss: 0.972 (1.168) Error: 0.344 (0.413) TRAIN Epoch: [10/32] Batch: [299/621] Time: 0.026 (0.033) Loss: 1.026 (1.176) Error: 0.391 (0.419) TRAIN Epoch: [10/32] Batch: [399/621] Time: 0.024 (0.032) Loss: 1.100 (1.173) Error: 0.469 (0.417) TRAIN Epoch: [10/32] Batch: [499/621] Time: 0.023 (0.032) Loss: 0.947 (1.169) Error: 0.297 (0.416) TRAIN Epoch: [10/32] Batch: [599/621] Time: 0.023 (0.032) Loss: 1.030 (1.163) Error: 0.359 (0.414) VAL Epoch: [10/32] Time: 0.004 (0.020) Loss: 1.229 (1.072) Error: 0.625 (0.378) TRAIN Epoch: [11/32] Batch: [99/621] Time: 0.016 (0.032) Loss: 1.057 (1.131) Error: 0.391 (0.405) TRAIN Epoch: [11/32] Batch: [199/621] Time: 0.027 (0.031) Loss: 1.136 (1.125) Error: 0.391 (0.400) TRAIN Epoch: [11/32] Batch: [299/621] Time: 0.024 (0.030) Loss: 1.165 (1.133) Error: 0.453 (0.403) TRAIN Epoch: [11/32] Batch: [399/621] Time: 0.025 (0.030) Loss: 1.146 (1.142) Error: 0.359 (0.406) TRAIN Epoch: [11/32] Batch: [499/621] Time: 0.027 (0.031) Loss: 1.367 (1.138) Error: 0.438 (0.404) TRAIN Epoch: [11/32] Batch: [599/621] Time: 0.048 (0.031) Loss: 1.274 (1.132) Error: 0.453 (0.403) VAL Epoch: [11/32] Time: 0.004 (0.013) Loss: 1.190 (1.097) Error: 0.375 (0.393) TRAIN Epoch: [12/32] Batch: [99/621] Time: 0.052 (0.032) Loss: 0.963 (1.079) Error: 0.406 (0.383) TRAIN Epoch: [12/32] Batch: [199/621] Time: 0.024 (0.032) Loss: 1.050 (1.111) Error: 0.344 (0.394) TRAIN Epoch: [12/32] Batch: [299/621] Time: 0.023 (0.032) Loss: 1.119 (1.102) Error: 0.328 (0.392) TRAIN Epoch: [12/32] Batch: [399/621] Time: 0.024 (0.032) Loss: 1.181 (1.100) Error: 0.484 (0.390) TRAIN Epoch: [12/32] Batch: [499/621] Time: 0.025 (0.032) Loss: 1.064 (1.094) Error: 0.438 (0.388) TRAIN Epoch: [12/32] Batch: [599/621] Time: 0.023 (0.031) Loss: 0.931 (1.089) Error: 0.281 (0.386) VAL Epoch: [12/32] Time: 0.004 (0.019) Loss: 0.786 (1.015) Error: 0.500 (0.351) TRAIN Epoch: [13/32] Batch: [99/621] Time: 0.028 (0.031) Loss: 0.929 (1.063) Error: 0.297 (0.370) TRAIN Epoch: [13/32] Batch: [199/621] Time: 0.023 (0.029) Loss: 0.990 (1.071) Error: 0.328 (0.379) TRAIN Epoch: [13/32] Batch: [299/621] Time: 0.025 (0.029) Loss: 1.247 (1.068) Error: 0.453 (0.376) TRAIN Epoch: [13/32] Batch: [399/621] Time: 0.033 (0.029) Loss: 1.020 (1.077) Error: 0.375 (0.379) TRAIN Epoch: [13/32] Batch: [499/621] Time: 0.024 (0.030) Loss: 1.211 (1.075) Error: 0.422 (0.379) TRAIN Epoch: [13/32] Batch: [599/621] Time: 0.024 (0.030) Loss: 0.914 (1.068) Error: 0.281 (0.377) VAL Epoch: [13/32] Time: 0.004 (0.010) Loss: 1.271 (1.082) Error: 0.500 (0.381) TRAIN Epoch: [14/32] Batch: [99/621] Time: 0.024 (0.031) Loss: 1.053 (1.059) Error: 0.375 (0.368) TRAIN Epoch: [14/32] Batch: [199/621] Time: 0.057 (0.031) Loss: 0.969 (1.042) Error: 0.344 (0.362) TRAIN Epoch: [14/32] Batch: [299/621] Time: 0.025 (0.031) Loss: 0.968 (1.036) Error: 0.344 (0.363) TRAIN Epoch: [14/32] Batch: [399/621] Time: 0.025 (0.031) Loss: 1.131 (1.029) Error: 0.438 (0.362) TRAIN Epoch: [14/32] Batch: [499/621] Time: 0.024 (0.031) Loss: 1.652 (1.046) Error: 0.609 (0.368) TRAIN Epoch: [14/32] Batch: [599/621] Time: 0.024 (0.031) Loss: 1.375 (1.107) Error: 0.516 (0.391) VAL Epoch: [14/32] Time: 0.004 (0.019) Loss: 1.131 (1.387) Error: 0.500 (0.477) TRAIN Epoch: [15/32] Batch: [99/621] Time: 0.011 (0.029) Loss: 1.176 (1.157) Error: 0.438 (0.412) TRAIN Epoch: [15/32] Batch: [199/621] Time: 0.009 (0.030) Loss: 0.945 (1.154) Error: 0.375 (0.413) TRAIN Epoch: [15/32] Batch: [299/621] Time: 0.058 (0.031) Loss: 0.955 (1.142) Error: 0.359 (0.407) TRAIN Epoch: [15/32] Batch: [399/621] Time: 0.024 (0.032) Loss: 1.203 (1.137) Error: 0.484 (0.404) TRAIN Epoch: [15/32] Batch: [499/621] Time: 0.017 (0.031) Loss: 1.129 (1.128) Error: 0.406 (0.400) TRAIN Epoch: [15/32] Batch: [599/621] Time: 0.011 (0.031) Loss: 1.246 (1.120) Error: 0.391 (0.396) VAL Epoch: [15/32] Time: 0.004 (0.010) Loss: 0.944 (1.376) Error: 0.500 (0.412) TRAIN Epoch: [16/32] Batch: [99/621] Time: 0.043 (0.032) Loss: 0.994 (1.025) Error: 0.375 (0.357) TRAIN Epoch: [16/32] Batch: [199/621] Time: 0.080 (0.032) Loss: 1.130 (0.991) Error: 0.375 (0.349) TRAIN Epoch: [16/32] Batch: [299/621] Time: 0.021 (0.032) Loss: 0.858 (0.972) Error: 0.266 (0.340) TRAIN Epoch: [16/32] Batch: [399/621] Time: 0.024 (0.033) Loss: 1.036 (0.959) Error: 0.297 (0.334) TRAIN Epoch: [16/32] Batch: [499/621] Time: 0.015 (0.033) Loss: 0.795 (0.951) Error: 0.219 (0.333) TRAIN Epoch: [16/32] Batch: [599/621] Time: 0.037 (0.031) Loss: 1.244 (0.949) Error: 0.484 (0.332) VAL Epoch: [16/32] Time: 0.004 (0.010) Loss: 0.875 (0.856) Error: 0.500 (0.293) TRAIN Epoch: [17/32] Batch: [99/621] Time: 0.053 (0.032) Loss: 0.788 (0.891) Error: 0.234 (0.311) TRAIN Epoch: [17/32] Batch: [199/621] Time: 0.081 (0.031) Loss: 0.911 (0.901) Error: 0.328 (0.316) TRAIN Epoch: [17/32] Batch: [299/621] Time: 0.022 (0.031) Loss: 0.861 (0.918) Error: 0.297 (0.323) TRAIN Epoch: [17/32] Batch: [399/621] Time: 0.023 (0.030) Loss: 0.924 (0.914) Error: 0.312 (0.322) TRAIN Epoch: [17/32] Batch: [499/621] Time: 0.023 (0.030) Loss: 1.089 (0.918) Error: 0.391 (0.323) TRAIN Epoch: [17/32] Batch: [599/621] Time: 0.023 (0.030) Loss: 1.059 (0.916) Error: 0.344 (0.322) VAL Epoch: [17/32] Time: 0.004 (0.020) Loss: 0.771 (0.839) Error: 0.500 (0.282) TRAIN Epoch: [18/32] Batch: [99/621] Time: 0.016 (0.029) Loss: 1.007 (0.888) Error: 0.344 (0.315) TRAIN Epoch: [18/32] Batch: [199/621] Time: 0.019 (0.029) Loss: 0.866 (0.898) Error: 0.281 (0.316) TRAIN Epoch: [18/32] Batch: [299/621] Time: 0.062 (0.028) Loss: 0.908 (0.892) Error: 0.281 (0.315) TRAIN Epoch: [18/32] Batch: [399/621] Time: 0.022 (0.028) Loss: 0.945 (0.890) Error: 0.297 (0.313) TRAIN Epoch: [18/32] Batch: [499/621] Time: 0.010 (0.028) Loss: 0.849 (0.894) Error: 0.344 (0.314) TRAIN Epoch: [18/32] Batch: [599/621] Time: 0.025 (0.029) Loss: 0.692 (0.893) Error: 0.250 (0.315) VAL Epoch: [18/32] Time: 0.004 (0.012) Loss: 0.792 (0.926) Error: 0.375 (0.289) TRAIN Epoch: [19/32] Batch: [99/621] Time: 0.025 (0.033) Loss: 1.077 (0.872) Error: 0.438 (0.310) TRAIN Epoch: [19/32] Batch: [199/621] Time: 0.025 (0.031) Loss: 1.005 (0.879) Error: 0.391 (0.311) TRAIN Epoch: [19/32] Batch: [299/621] Time: 0.062 (0.030) Loss: 0.968 (0.878) Error: 0.312 (0.312) TRAIN Epoch: [19/32] Batch: [399/621] Time: 0.019 (0.031) Loss: 1.013 (0.882) Error: 0.328 (0.313) TRAIN Epoch: [19/32] Batch: [499/621] Time: 0.013 (0.031) Loss: 0.948 (0.878) Error: 0.328 (0.311) TRAIN Epoch: [19/32] Batch: [599/621] Time: 0.025 (0.031) Loss: 0.682 (0.878) Error: 0.234 (0.310) VAL Epoch: [19/32] Time: 0.004 (0.011) Loss: 0.743 (0.827) Error: 0.250 (0.273) TRAIN Epoch: [20/32] Batch: [99/621] Time: 0.029 (0.033) Loss: 0.835 (0.868) Error: 0.250 (0.302) TRAIN Epoch: [20/32] Batch: [199/621] Time: 0.024 (0.031) Loss: 0.913 (0.861) Error: 0.359 (0.300) TRAIN Epoch: [20/32] Batch: [299/621] Time: 0.017 (0.031) Loss: 0.764 (0.862) Error: 0.266 (0.300) TRAIN Epoch: [20/32] Batch: [399/621] Time: 0.061 (0.031) Loss: 0.838 (0.854) Error: 0.328 (0.299) TRAIN Epoch: [20/32] Batch: [499/621] Time: 0.024 (0.031) Loss: 0.795 (0.858) Error: 0.281 (0.301) TRAIN Epoch: [20/32] Batch: [599/621] Time: 0.022 (0.031) Loss: 0.936 (0.859) Error: 0.297 (0.302) VAL Epoch: [20/32] Time: 0.076 (0.016) Loss: 0.602 (0.831) Error: 0.375 (0.275) TRAIN Epoch: [21/32] Batch: [99/621] Time: 0.029 (0.028) Loss: 0.803 (0.861) Error: 0.266 (0.301) TRAIN Epoch: [21/32] Batch: [199/621] Time: 0.018 (0.029) Loss: 1.046 (0.855) Error: 0.359 (0.300) TRAIN Epoch: [21/32] Batch: [299/621] Time: 0.024 (0.029) Loss: 0.586 (0.857) Error: 0.172 (0.302) TRAIN Epoch: [21/32] Batch: [399/621] Time: 0.025 (0.029) Loss: 0.791 (0.857) Error: 0.297 (0.301) TRAIN Epoch: [21/32] Batch: [499/621] Time: 0.017 (0.029) Loss: 0.911 (0.851) Error: 0.281 (0.299) TRAIN Epoch: [21/32] Batch: [599/621] Time: 0.012 (0.030) Loss: 0.757 (0.851) Error: 0.312 (0.300) VAL Epoch: [21/32] Time: 0.077 (0.015) Loss: 0.542 (0.760) Error: 0.375 (0.264) TRAIN Epoch: [22/32] Batch: [99/621] Time: 0.025 (0.031) Loss: 1.019 (0.843) Error: 0.344 (0.298) TRAIN Epoch: [22/32] Batch: [199/621] Time: 0.026 (0.030) Loss: 0.761 (0.846) Error: 0.281 (0.299) TRAIN Epoch: [22/32] Batch: [299/621] Time: 0.056 (0.030) Loss: 0.642 (0.839) Error: 0.219 (0.297) TRAIN Epoch: [22/32] Batch: [399/621] Time: 0.023 (0.029) Loss: 1.187 (0.844) Error: 0.359 (0.298) TRAIN Epoch: [22/32] Batch: [499/621] Time: 0.024 (0.028) Loss: 1.071 (0.842) Error: 0.375 (0.298) TRAIN Epoch: [22/32] Batch: [599/621] Time: 0.023 (0.029) Loss: 0.930 (0.841) Error: 0.344 (0.298) VAL Epoch: [22/32] Time: 0.004 (0.013) Loss: 0.620 (0.865) Error: 0.250 (0.277) TRAIN Epoch: [23/32] Batch: [99/621] Time: 0.061 (0.029) Loss: 1.045 (0.819) Error: 0.297 (0.285) TRAIN Epoch: [23/32] Batch: [199/621] Time: 0.060 (0.030) Loss: 0.814 (0.825) Error: 0.297 (0.292) TRAIN Epoch: [23/32] Batch: [299/621] Time: 0.025 (0.030) Loss: 0.997 (0.925) Error: 0.375 (0.326) TRAIN Epoch: [23/32] Batch: [399/621] Time: 0.035 (0.030) Loss: 0.810 (0.924) Error: 0.312 (0.328) TRAIN Epoch: [23/32] Batch: [499/621] Time: 0.014 (0.030) Loss: 0.716 (0.918) Error: 0.234 (0.325) TRAIN Epoch: [23/32] Batch: [599/621] Time: 0.068 (0.030) Loss: 0.918 (0.912) Error: 0.250 (0.323) VAL Epoch: [23/32] Time: 0.004 (0.013) Loss: 0.724 (0.819) Error: 0.500 (0.264) TRAIN Epoch: [24/32] Batch: [99/621] Time: 0.014 (0.033) Loss: 0.932 (0.832) Error: 0.359 (0.296) TRAIN Epoch: [24/32] Batch: [199/621] Time: 0.025 (0.032) Loss: 0.862 (0.840) Error: 0.344 (0.296) TRAIN Epoch: [24/32] Batch: [299/621] Time: 0.023 (0.031) Loss: 0.702 (0.833) Error: 0.234 (0.296) TRAIN Epoch: [24/32] Batch: [399/621] Time: 0.019 (0.030) Loss: 0.690 (0.835) Error: 0.156 (0.296) TRAIN Epoch: [24/32] Batch: [499/621] Time: 0.021 (0.030) Loss: 1.074 (0.836) Error: 0.344 (0.296) TRAIN Epoch: [24/32] Batch: [599/621] Time: 0.025 (0.029) Loss: 0.921 (0.835) Error: 0.344 (0.295) VAL Epoch: [24/32] Time: 0.076 (0.018) Loss: 0.769 (0.802) Error: 0.500 (0.264) TRAIN Epoch: [25/32] Batch: [99/621] Time: 0.057 (0.033) Loss: 0.910 (0.823) Error: 0.312 (0.293) TRAIN Epoch: [25/32] Batch: [199/621] Time: 0.025 (0.034) Loss: 0.693 (0.830) Error: 0.250 (0.294) TRAIN Epoch: [25/32] Batch: [299/621] Time: 0.077 (0.032) Loss: 0.994 (0.837) Error: 0.359 (0.296) TRAIN Epoch: [25/32] Batch: [399/621] Time: 0.044 (0.032) Loss: 0.947 (0.838) Error: 0.344 (0.297) TRAIN Epoch: [25/32] Batch: [499/621] Time: 0.067 (0.032) Loss: 0.868 (0.835) Error: 0.234 (0.296) TRAIN Epoch: [25/32] Batch: [599/621] Time: 0.018 (0.032) Loss: 0.897 (0.832) Error: 0.297 (0.293) VAL Epoch: [25/32] Time: 0.004 (0.010) Loss: 0.789 (0.781) Error: 0.500 (0.261) TRAIN Epoch: [26/32] Batch: [99/621] Time: 0.023 (0.031) Loss: 1.202 (0.820) Error: 0.312 (0.285) TRAIN Epoch: [26/32] Batch: [199/621] Time: 0.023 (0.031) Loss: 0.820 (0.820) Error: 0.297 (0.287) TRAIN Epoch: [26/32] Batch: [299/621] Time: 0.018 (0.031) Loss: 0.992 (0.820) Error: 0.328 (0.288) TRAIN Epoch: [26/32] Batch: [399/621] Time: 0.025 (0.031) Loss: 0.690 (0.817) Error: 0.281 (0.288) TRAIN Epoch: [26/32] Batch: [499/621] Time: 0.012 (0.030) Loss: 1.008 (0.819) Error: 0.266 (0.290) TRAIN Epoch: [26/32] Batch: [599/621] Time: 0.053 (0.030) Loss: 1.223 (0.820) Error: 0.422 (0.291) VAL Epoch: [26/32] Time: 0.004 (0.013) Loss: 0.725 (0.885) Error: 0.500 (0.265) TRAIN Epoch: [27/32] Batch: [99/621] Time: 0.066 (0.032) Loss: 0.823 (0.833) Error: 0.281 (0.293) TRAIN Epoch: [27/32] Batch: [199/621] Time: 0.046 (0.032) Loss: 0.759 (0.825) Error: 0.281 (0.291) TRAIN Epoch: [27/32] Batch: [299/621] Time: 0.016 (0.032) Loss: 0.788 (0.824) Error: 0.328 (0.291) TRAIN Epoch: [27/32] Batch: [399/621] Time: 0.025 (0.031) Loss: 0.964 (0.823) Error: 0.391 (0.290) TRAIN Epoch: [27/32] Batch: [499/621] Time: 0.051 (0.030) Loss: 0.768 (0.823) Error: 0.281 (0.291) TRAIN Epoch: [27/32] Batch: [599/621] Time: 0.027 (0.030) Loss: 0.998 (0.821) Error: 0.359 (0.291) VAL Epoch: [27/32] Time: 0.004 (0.013) Loss: 0.740 (0.911) Error: 0.375 (0.264) TRAIN Epoch: [28/32] Batch: [99/621] Time: 0.012 (0.030) Loss: 0.579 (0.810) Error: 0.203 (0.287) TRAIN Epoch: [28/32] Batch: [199/621] Time: 0.009 (0.029) Loss: 0.877 (0.812) Error: 0.344 (0.287) TRAIN Epoch: [28/32] Batch: [299/621] Time: 0.019 (0.028) Loss: 0.751 (0.816) Error: 0.234 (0.289) TRAIN Epoch: [28/32] Batch: [399/621] Time: 0.032 (0.028) Loss: 0.970 (0.813) Error: 0.375 (0.288) TRAIN Epoch: [28/32] Batch: [499/621] Time: 0.024 (0.028) Loss: 0.756 (0.813) Error: 0.234 (0.288) TRAIN Epoch: [28/32] Batch: [599/621] Time: 0.024 (0.028) Loss: 1.008 (0.815) Error: 0.359 (0.289) VAL Epoch: [28/32] Time: 0.005 (0.011) Loss: 0.677 (0.835) Error: 0.500 (0.259) TRAIN Epoch: [29/32] Batch: [99/621] Time: 0.019 (0.030) Loss: 0.962 (0.831) Error: 0.266 (0.293) TRAIN Epoch: [29/32] Batch: [199/621] Time: 0.010 (0.028) Loss: 0.774 (0.831) Error: 0.250 (0.296) TRAIN Epoch: [29/32] Batch: [299/621] Time: 0.024 (0.028) Loss: 0.708 (0.823) Error: 0.250 (0.291) TRAIN Epoch: [29/32] Batch: [399/621] Time: 0.019 (0.029) Loss: 1.111 (0.821) Error: 0.406 (0.291) TRAIN Epoch: [29/32] Batch: [499/621] Time: 0.022 (0.029) Loss: 0.701 (0.815) Error: 0.312 (0.288) TRAIN Epoch: [29/32] Batch: [599/621] Time: 0.025 (0.029) Loss: 0.831 (0.818) Error: 0.312 (0.288) VAL Epoch: [29/32] Time: 0.004 (0.021) Loss: 0.725 (0.777) Error: 0.500 (0.263) TRAIN Epoch: [30/32] Batch: [99/621] Time: 0.025 (0.031) Loss: 0.487 (0.805) Error: 0.156 (0.286) TRAIN Epoch: [30/32] Batch: [199/621] Time: 0.074 (0.028) Loss: 0.729 (0.810) Error: 0.219 (0.283) TRAIN Epoch: [30/32] Batch: [299/621] Time: 0.015 (0.028) Loss: 0.622 (0.801) Error: 0.250 (0.282) TRAIN Epoch: [30/32] Batch: [399/621] Time: 0.018 (0.029) Loss: 0.942 (0.806) Error: 0.359 (0.285) TRAIN Epoch: [30/32] Batch: [499/621] Time: 0.070 (0.030) Loss: 0.659 (0.807) Error: 0.250 (0.286) TRAIN Epoch: [30/32] Batch: [599/621] Time: 0.025 (0.031) Loss: 0.655 (0.811) Error: 0.250 (0.287) VAL Epoch: [30/32] Time: 0.004 (0.014) Loss: 0.672 (0.778) Error: 0.375 (0.261) TRAIN Epoch: [31/32] Batch: [99/621] Time: 0.024 (0.031) Loss: 0.755 (0.809) Error: 0.250 (0.291) TRAIN Epoch: [31/32] Batch: [199/621] Time: 0.041 (0.032) Loss: 0.854 (0.809) Error: 0.328 (0.285) TRAIN Epoch: [31/32] Batch: [299/621] Time: 0.032 (0.031) Loss: 0.786 (0.814) Error: 0.297 (0.287) TRAIN Epoch: [31/32] Batch: [399/621] Time: 0.053 (0.032) Loss: 0.894 (0.808) Error: 0.344 (0.285) TRAIN Epoch: [31/32] Batch: [499/621] Time: 0.023 (0.031) Loss: 0.845 (0.816) Error: 0.328 (0.288) TRAIN Epoch: [31/32] Batch: [599/621] Time: 0.026 (0.032) Loss: 0.849 (0.812) Error: 0.328 (0.286) VAL Epoch: [31/32] Time: 0.004 (0.014) Loss: 0.715 (0.767) Error: 0.375 (0.259)
torch.save(best_reclassifier_model, "ResNet_Reclassifier")
Now that we have training our reclassifier model, we will reclassify all the wrong images.
pure_train_dataset = datasets.CIFAR10('test', train=True, transform=test_transforms)
anomalous_imgs = torch.utils.data.Subset(pure_train_dataset, anomalous_idx)
anomal_loader = DataLoader(anomalous_imgs,
batch_size=TRAIN_BATCH_SIZE,
shuffle=True,
pin_memory=(torch.cuda.is_available()),
num_workers=0)
X_corrected = None
y_corrected = None
# unscale the image data so that we can reintegrate them back to the tensorflow cifar10
unscale = lambda x: (x * 255)
for batch in anomal_loader:
input_, _ = batch
unscaled_input = unscale(input_.permute(0, 2, 3, 1))
input_ = input_.to(device)
preds = best_reclassifier_model(input_)
if X_corrected is None and y_corrected is None:
X_corrected = unscaled_input.numpy()
y_corrected = torch.argmax(preds, dim=1).cpu().numpy()
else:
X_corrected = np.concatenate((X_corrected, unscaled_input.numpy()), axis=0)
y_corrected = np.concatenate((y_corrected, torch.argmax(preds, dim=1).cpu().numpy()), axis=0)
y_corrected = np.expand_dims(y_corrected, axis=1)
np.savez('corrected_data.npz', X_corrected=X_corrected, y_corrected=y_corrected,anomalous_idx=anomalous_idx )
loaded_data = np.load('./corrected_data2.npz')
X_corrected = loaded_data['X_corrected']
y_corrected = loaded_data['y_corrected']
anomalous_idx = loaded_data['anomalous_idx']
Now we have the inex of the images that are wrongly classfied and also their correct labels. Using this information, we will now switch to tensorflow and use those indexs to shift the data. We would also like to mention that since both tensorflow and pytorch get the data from the same repo, we have confirmed that the indexs of the images are consistent alowing this switch to be possible.
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# remove images and labels that are anomalous
X_train = np.delete(X_train, anomalous_idx, axis=0)
y_train = np.delete(y_train, anomalous_idx, axis=0)
# and then concatenate the dataset with the corrected label
X_train = np.concatenate((X_train, X_corrected), axis=0)
y_train = np.concatenate((y_train, y_corrected), axis=0)
X_train = torch.from_numpy(X_train).to(device).type(torch.int32)
y_train = torch.from_numpy(y_train).squeeze()
X_test = torch.from_numpy(X_test).to(device).type(torch.int32)
y_test = torch.from_numpy(y_test).squeeze()
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz 170498071/170498071 [==============================] - 8s 0us/step
# Normalisation
scaler = lambda x: (x.type(torch.float32) / 127.5) - 1
X_train_scaled = scaler(X_train)
X_test_scaled = scaler(X_test)
# lastly, I need to reshape our images so that it fits into the pytorch model
X_train_scaled = X_train_scaled.permute(0, 3, 1, 2)
X_test_scaled = X_test_scaled.permute(0, 3, 1, 2)
# One Hot
y_train = torch.from_numpy(to_categorical(y_train).astype('int64')).to(device)
y_test = torch.from_numpy(to_categorical(y_test).astype('int64')).to(device)
train_dataset = TensorDataset(X_train_scaled, y_train)
test_dataset = TensorDataset(X_test_scaled, y_test)
train_loader = DataLoader(train_dataset, batch_size=TRAIN_BATCH_SIZE, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=TRAIN_BATCH_SIZE, shuffle=False)
Data Augmentation¶
Since we have limited data of just 50,000 training images, data augmentations could help us generate more while also reducing overfitting in the dicriminator. While traditional data augmentation could be viable for this project, we propose to use a more novel approach as described in this paper where Differentiable Augmentation was used to get state of the art results.

We experiment on the class-conditional BigGAN [2] and CR-BigGAN [50] and unconditional
StyleGAN2 [18] models. For a fair comparison, we also augment real images with random horizontal flips for all the baselines. The baseline models already adopt advanced regularization techniques, including Spectral Normalization [28], Consistency Regularization [50], and R1 regularization [27]; however, none of them achieves satisfactory results under the 10% data setting. For DiffAugment, we adopt Translation + Cutout for the BigGAN models, Color + Cutout for StyleGAN2 with 100% data, and Color + Translation + Cutout for StyleGAN2 with 10% or 20% data. As summarized in Table 4, our method improves all the baselines independently of the baseline architectures, regularizations, and loss functions (hinge loss in BigGAN and non-saturating loss in StyleGAN2) without any hyperparameter changes. We refer the readers to the appendix (Tables 6-7) for the complete tables with IS. The improvements are considerable especially when limited data is available. This is, to our knowledge, the new state of the art on CIFAR-10 and CIFAR-100 for both class-conditional and unconditional generation under all the 10%, 20%, and 100% data settings.
Differential Augmentation as the name suggests adds image augmentation which are differentiable to the Discriminator model. Instead of introducing random noise, the paper, we allow the model to adapt its augmentation strategy based on the learning objective. The paper also proposes a "Augment everywhere" idea which suggests to apply augmentation not just to the real data but also the fake data. This approach will definitely help with mitigating mode collapse as the generator would have to generate more variety of images to fool the discriminator.
Although it is possible to implement DiffAugment from scratch using PyTorch and we have tried it. We would like to simplify the process by using a library called Kornia. Kornia is a open source computer vision library that is designed for differtiable computer vision operations. One of its main selling points is its close integration with pytorch. All of its modules are actually valid PyTorch objects and thus it ontegrates well into our exisiting code.
diffaugment = nn.Sequential(
K.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.2, p=0.5),
K.RandomAffine(degrees=0, translate=(0.2, 0.2), scale=(1.0, 1.0), p=0.5),
K.RandomErasing(scale=(0.1, 0.2), ratio=(0.3, 3.3), same_on_batch=False, p=0.5)
)
🗺️ Navigating the CIFAR-10 Landscape¶
We will using a series of visualisations to take a look at our dataset.
Sample Image¶
I visualized a sample image from the training dataset, showcasing a 32 x 32 size image. The image was displayed with the corresponding class label, providing a quick overview of the dataset.
# Loading Tensorflow Dataset
(X_train_eda, y_train_eda), (X_test_eda, y_test_eda) = cifar10.load_data()
X_train_eda = torch.from_numpy(X_train_eda).to(device).type(torch.int32)
y_train_eda = torch.from_numpy(y_train_eda).squeeze()
X_test_eda = torch.from_numpy(X_test_eda).to(device).type(torch.int32)
y_test_eda = torch.from_numpy(y_test_eda).squeeze()
train_dataset_eda = TensorDataset(X_train_eda, y_train_eda)
test_dataset_eda = TensorDataset(X_test_eda, y_test_eda)
sample_image, label = train_dataset_eda[69]
plt.title(f"Image of a {CLASSES[label]}")
plt.imshow(sample_image.cpu().squeeze().numpy())
plt.xticks(range(0, sample_image.shape[0], 4))
plt.yticks(range(0, sample_image.shape[1], 4))
plt.show()
Dataset Split Overview¶
I examined the split between the training and testing datasets, revealing a balanced distribution of 50,000 images for training and 10,000 images for testing.
split = [len(train_dataset_eda),len(test_dataset_eda)]
train_test = ["Train", "Test"]
print(f"""
Training Images: {split[0]}
Testing Images: {split[1]}
""")
plt.pie(split, labels=train_test, autopct='%1.1f%%', startangle=90)
plt.title('Dataset Split: Train vs Test')
plt.show()
Training Images: 50000 Testing Images: 10000
Class Distribution in Train and Test Sets¶
I analyzed the distribution of classes in both the training and testing sets, confirming a well-balanced distribution across all classes.
train_counts = {k: 0 for k in CLASSES}
test_counts = {k: 0 for k in CLASSES}
for _, labels in train_dataset_eda:
class_name = CLASSES[labels]
train_counts[class_name] += 1
for _, labels in test_dataset_eda:
class_name = CLASSES[labels]
test_counts[class_name] += 1
fig, ax = plt.subplots(figsize=(10, 6))
plt.bar(range(len(CLASSES)), train_counts.values(), 0.45, label='Train')
plt.bar(range(len(CLASSES)), test_counts.values(), 0.45, label='Test')
plt.xlabel('Classes')
plt.ylabel('Count')
plt.title('Class Distribution in CIFAR-10 Train and Test Sets')
plt.xticks(range(len(CLASSES)), CLASSES)
plt.legend()
plt.show()
Overview of Class Images¶
I provided an overview of images from each class in the training dataset, offering a visual representation of the distinct classes.
class_images = {k: None for k in range(len(CLASSES))}
fig, axes = plt.subplots(1, len(CLASSES), figsize=(15, 6))
i = 0
for images, labels in train_dataset_eda:
if class_images[labels.item()] is None:
axes[i].imshow(images.cpu().squeeze().numpy())
axes[i].set_title(CLASSES[labels.item()])
axes[i].axis('off')
i+=1
class_images[labels.item()] = True
fig.suptitle("Images - An Overview")
plt.tight_layout()
fig.subplots_adjust(top=1.5)
plt.show()
Averaging Classes¶
I calculated the average image for each class in the training dataset, providing a visual representation of the rough outlines for the car, horse, and truck classes.
class_averages = {}
class_counts = {}
for data, target in train_dataset_eda:
target = target.item()
if target not in class_averages:
class_averages[target] = torch.zeros_like(data)
class_counts[target] = 0
class_averages[target] += data
class_counts[target] += 1
i = 0
fig, axes = plt.subplots(1, len(CLASSES), figsize=(15, 6))
for key in class_averages.keys():
img = (class_averages[key] / class_counts[key]) / 255
axes[i].imshow(img.cpu().squeeze().numpy())
axes[i].set_title(CLASSES[key])
axes[i].axis('off')
i+=1
fig.suptitle("Averaging Classes")
plt.tight_layout()
fig.subplots_adjust(top=1.5)
IMAGE_SIZE = 32 # image size, number of pixel, assuming square image (length = width)
CHANNELS = 3 # number of channels, which in our case is 3, because RGB
NUM_CLASS = 10 # number of classes, which in cifar-10, is 10
TRAIN_BATCH_SIZE = 512 # Adjust based on how powerful GPU is, 2048 for RTX 4090 but 512 for P100
CLASSES = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
📊 Metrics: Precision and Creativity¶
GANs are notoriously difficult to score. Evaluating the effectiveness of GAN is still an area of ongoing research and new techniques are still being found. Hoewever, there are still some commonly used metrics which could help us gauge a model's performance here.
Inception Score (higher is better)¶
This metric makes use of the Inception V3 model to generate a softmax probability distribution of a given image. For context, Inception V3 is a classification model training on the ImageNet dataset which has 1000 classes.

When we input an image into the model, it generates a probability distribtuion indicating which class the image is more likely to belong to.

We then compare the distribution to get the Inception Score to a marginal distribution. The way this is done is using somthing called KL divergence.

If a image is not very clear or badly generated, the idea is that the softmax probability distribution will be more uniform. On the other hand when the image is very clear, it will be less uniform and concentrated around a certain label, say Dog.
Fréchet Inception Distance (lower the better)¶
FID builds on the Inception score and make it even better. Rather than just analysing the distribution of the generated imaged, FID, compares the distirbution of the real images and fake images using Fréchet Distance. Unlike, the Inception Score, rather than using the final layer softmax probabilities, the activations of the final pooling layers are used to extract high level semantic information about the characteristics of an image.

Here, if the generated images are closer to the actual images, it suggests that they are very realistic, resulting in a lower score. Therefore, the lower FID suggests higher quality images.
Kernel Inception Distance (lower the better)¶
KID was introduced to address some of the bias present in FID, especially for small datasets. Rather than using Fréchet Distance to compare two distributions, KID uses Maximum Mean Discrepancy instead.
The following is an excerpt from a paper on Evaluation of Generative Models
The Kernel Inception Distance (KID) [16] aims to improve on FID by relaxing the Gaussian assumption. KID measures the squared Maximum Mean Discrepancy (MMD) between the Inception
representations of the real and generated samples using a polynomial kernel. This is a non-parametric test so it does not have the strict Gaussian assumption, only assuming that the kernel is a good similarity measure. It also requires fewer samples as we do not need to fit the quadratic covariance matrix.
Since our CIFAR-10 dataset is a relatviely smaller dataset, we feel that KID is a more suitable metric for this project. But we will use both FID and KID for completeness.
Torch Metric¶
To use KID in our project, we will be using the Torchmetric library which provides a fast and precise wasy of calculating all the metrics mentioned here.
Based on online data, these are the best possible scores our model can get.
| Metric | Value |
|---|---|
| Frechet Inception Distance | 3.14 |
| Kernel Inception Distance | ≈ 0 |
These scores will help us gauge how close our model is to the best possible score.
import PIL.Image as Image
fid_metric = FrechetInceptionDistance(feature=2048, reset_real_features=False).to(device)
kid_metric = KernelInceptionDistance(feature=2048, reset_real_features=False).to(device)
transform = transforms.Compose([
transforms.Resize((299, 299), Image.BILINEAR,antialias=True),
])
for batch in tqdm(train_loader):
real_images = batch[0].to(device)
real_images = transform(real_images)
#real_images = torch.stack([transform(img) for img in real_images]).to(device)
real_images = real_images * 255
real_images = real_images.to(torch.uint8)
fid_metric.update(real_images, real=True)
kid_metric.update(real_images, real=True)
del real_images, batch
torch.cuda.empty_cache()
Downloading: "https://github.com/toshas/torch-fidelity/releases/download/v0.2.0/weights-inception-2015-12-05-6726825d.pth" to /root/.cache/torch/hub/checkpoints/weights-inception-2015-12-05-6726825d.pth 100%|██████████| 91.2M/91.2M [00:01<00:00, 57.4MB/s] 100%|██████████| 98/98 [01:14<00:00, 1.32it/s]
🤖 Beep Boop! GAN Comes to Life¶
The CIFAR-10 dataset presents a unique challenge due to its diverse set of 10 classes. A vanilla GAN trained on such a dataset would generate images that blend all classes, resulting in incomprehensible outputs. To address this limitation and gain better control over image generation, we turn to a more sophisticated approach – Conditional GAN (cGAN). The concept of cGAN was introduced in the research paper by Mirza, M. and Osindero, S. (2014) and we will drawing inspiration from it.
Why cGAN?¶
Class-Specific Generation:
- Vanilla GANs mix all classes during training, leading to ambiguous outputs.
- cGAN allows us to input a class label ($y$) into both the discriminator ($D$) and generator ($G$), enabling class-specific image generation.
Conditional Input:
- In cGAN, the label $y$ serves as a condition for both $D$ and $G$, influencing the generation process.
- Mathematically, this is represented as $G(z|y)$, indicating that the generator takes a vector $z$ as input, given the condition $y$.
How cGAN Works:¶
Model Architecture:
- Both the generator and discriminator receive the class label ($y$) as an additional input beside the random noise ($z$) and image ($x$), as illustrated below.

Training Objective:
- By incorporating the class label during training, the cGAN learns to distinguish and generate images specific to the input class.
Our Modifications
We will be using the following suggested modifications from a paper on Deep Convolutional GAN
- Replacing any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator).
- Using batchnorm in both the generator and the discriminator.
- Removing fully connected hidden layers for deeper architectures.
- Using ReLU activation in generator for all layers except for the output, which uses tanh.
- Using LeakyReLU activation in the discriminator for all layer.
# Generator architecture
class SimpleGenerator(nn.Module):
def __init__(self, latent_dim, hidden_dim):
super(SimpleGenerator, self).__init__()
self.hidden_dim = hidden_dim
self.latent_dim = latent_dim
self.input_layers = nn.Sequential(
nn.Linear(latent_dim + NUM_CLASS, hidden_dim*2),
nn.Linear(hidden_dim*2, hidden_dim),
nn.LeakyReLU(0.1, inplace=True)
)
self.conv_layers = nn.Sequential(
nn.ConvTranspose2d(int(hidden_dim/4), 128, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(128),
nn.LeakyReLU(True),
nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(64),
nn.LeakyReLU(True),
nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(32),
nn.LeakyReLU(True),
nn.ConvTranspose2d(32, CHANNELS, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(CHANNELS),
nn.LeakyReLU(True),
nn.Tanh()
)
def forward(self, noise, classes):
inputs = torch.cat((classes, noise), 1)
outputs = self.input_layers(inputs)
reshape_shape = int(self.hidden_dim/4)
outputs = torch.reshape(outputs, (outputs.size()[0], reshape_shape, 2, 2))
return self.conv_layers(outputs)
Here, instead of adding the label embeddings into the input image, I will concatenate it to the intermediate representation.
# Discriminator architecture
class SimpleDiscriminator(nn.Module):
def __init__(self):
super(SimpleDiscriminator, self).__init__()
self.conv_layers = nn.Sequential(
nn.Conv2d(CHANNELS, 32, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(32),
nn.LeakyReLU(0.1, inplace=True),
nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(64),
nn.LeakyReLU(0.1, inplace=True),
nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(128),
nn.LeakyReLU(0.1, inplace=True),
nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(256),
nn.LeakyReLU(0.1, inplace=True),
nn.AvgPool2d(2, stride=2)
)
self.output_layers = nn.Sequential(
nn.Linear(256 + NUM_CLASS, 512),
nn.LeakyReLU(0.1, inplace=True),
nn.Linear(512, 1),
nn.Sigmoid()
)
def forward(self, x, labels):
output = self.conv_layers(x).squeeze()
x = torch.cat((output,labels), dim=1)
x = self.output_layers(x)
return x
Model¶
The cGAN class has the architecture of a Conditional Generative Adversarial Network
Architecture:¶
- The cGAN integrates a discriminator and a generator
- Following GAN princples, these two models will compete to fool each other
Training:¶
- The cGAN engages in adversarial training, where the generator learns to produce realistic images, and the discriminator learns to distinguish between real and generated samples.
- Separate optimizers are defined for the generator and discriminator.
- Optimizers also can have different learning rates
Conditioning¶
- The generator is tasked with producing synthetic images based on random noise and provided class labels.
- While there are many methods of conditioning, for our base model we decided to go ahead with simple concatenation
Loss¶
- In the original implementation of Minimax loss the authors has observed problems in the early stages of GAN training. To combat this we can use the modified BCELoss.
class cGAN(nn.Module):
def __init__(self, generator, discriminator, train_loader):
super().__init__()
self.discriminator = discriminator.apply(self.init_weight)
self.generator = generator.apply(self.init_weight)
self.g_opt = optim.Adam(self.generator.parameters(),
lr=0.0002, betas=(0.5, 0.999),
weight_decay=2e-5)
self.d_opt = optim.Adam(self.discriminator.parameters(),
lr=0.0002, betas=(0.5, 0.999),
weight_decay=2e-5)
self.fid_metric = fid_metric
self.kid_metric = kid_metric
self.loss = nn.BCELoss()
self.disc_scores, self.gen_scores, self.fid_scores = [], [], []
self.kid_scores, self.fid_epochs = [], []
self.best_model = None
self.best_score = 1000
def init_weight(self,layer):
name = layer.__class__.__name__
if name.find("BatchNorm") != -1:
nn.init.normal_(layer.weight, 1.0, 0.02)
nn.init.zeros_(layer.bias)
if name.find("Conv") != -1:
nn.init.normal_(layer.weight, 0.0, 0.02)
def gen_step(self,img,label):
self.g_opt.zero_grad()
img = img.to(device)
label = label.to(device)
noise = torch.normal(0, 1, (img.size()[0], self.generator.latent_dim), device=device)
fake_imgs = self.generator(noise, label)
fake_pred = self.discriminator(fake_imgs, label)
real_label = torch.ones((img.size()[0], 1), device=device)
g_loss = self.loss(fake_pred, real_label)
g_loss.backward()
self.g_opt.step()
return g_loss.cpu().item()
def disc_step(self,img,label):
self.d_opt.zero_grad()
img = img.to(device)
label = label.to(device)
noise = torch.normal(0, 1, (img.size()[0], self.generator.latent_dim), device=device)
fake_imgs = self.generator(noise, label)
fake_pred = self.discriminator(fake_imgs, label)
real_pred = self.discriminator(img, label)
fake_label = torch.zeros((img.size()[0], 1), device=device)
real_label = torch.ones((img.size()[0], 1), device=device)
d_loss = (self.loss(fake_pred, fake_label) + self.loss(real_pred, real_label)) / 2
d_loss.backward()
self.d_opt.step()
return d_loss.cpu().item()
def fit(self, epochs, train_loader):
print(f"Training {self.__class__.__name__} for {epochs} Epochs")
self.discriminator.train()
self.generator.train()
for epoch in range(epochs):
disc_losses, gen_losses = [], []
progress = tqdm(train_loader, desc=f'Training Epoch {epoch + 1}/{epochs}', leave=True, colour="green", dynamic_ncols=True)
for img, label in progress:
disc_loss = self.disc_step(img, label)
gen_loss = self.gen_step(img, label)
disc_losses.append(disc_loss)
gen_losses.append(gen_loss)
progress.set_postfix(disc_loss=disc_loss, gen_loss=gen_loss)
self.disc_scores.append(np.mean(disc_losses))
self.gen_scores.append(np.mean(gen_losses))
self.on_epoch_end(epoch)
def on_epoch_end(self, epoch):
if epoch%20 == 0:
fid,kid = self.get_fid_kid()
self.fid_epochs.append(epoch)
self.fid_scores.append(fid)
self.kid_scores.append(kid)
print(f"FID: {fid}, KID: {kid}")
if epoch%50 == 0:
img, label = self.generate_samples(10)
self.display_images(img, label)
img, label = self.generate_samples(10)
self.display_images(img, label)
def generate_samples(self,n):
self.generator.eval()
noise = torch.normal(0, 1, (n, self.generator.latent_dim), device=device)
random_indices = torch.randint(0, 10, size=(n,), device = device)
sample_classes = torch.eye(10, device = device)[random_indices]
generated_imgs = self.generator(noise, sample_classes)
generated_imgs = (generated_imgs.cpu().detach().permute(0, 2, 3, 1).numpy() - (-1)) / (1 - (-1)) # scale from [-1, 1] to [0, 1]
return generated_imgs, sample_classes
def get_fid_kid(self):
self.fid_metric.reset()
self.kid_metric.reset()
for _ in tqdm(range(10000//TRAIN_BATCH_SIZE)):
with torch.no_grad():
latent_space = torch.normal(
0, 1, (TRAIN_BATCH_SIZE, self.generator.latent_dim), device=device, requires_grad=False)
gen_labels = torch.randint(
0, 10, (TRAIN_BATCH_SIZE,), device=device, requires_grad=False)
gen_labels = torch.nn.functional.one_hot(gen_labels, 10)
fake_imgs = self.generator(latent_space, gen_labels)
fake_imgs = torch.tensor(fake_imgs).to(device)
fake_imgs = transform(fake_imgs)
fake_imgs = (fake_imgs * 255).to(torch.uint8)
self.fid_metric.update(fake_imgs, real=False)
self.kid_metric.update(fake_imgs, real=False)
fid = self.fid_metric.compute().cpu().numpy()
kid = self.kid_metric[0].compute().cpu().numpy()
if fid < self.best_score:
self.best_score = fid
self.best_model = [self.generator, self.discriminator]
return fid, kid
def display_images(self, imgs, labels):
num_cols = 5
num_rows = (10 + num_cols - 1) // num_cols
fig, axs = plt.subplots(num_rows, num_cols, figsize=(12, 6))
axs = axs.flatten()
for i in range(10):
axs[i].imshow(imgs[i])
class_index = labels[i].nonzero().item()
axs[i].set_title(CLASSES[class_index])
axs[i].axis('off')
plt.tight_layout()
plt.show()
def save(self, name=None):
time_post = int(time.time())
if name is None:
torch.save(self.generator,f"{self.__class__.__name__}-gen_{time_post}.pt")
torch.save(self.discriminator,f"{self.__class__.__name__}-disc_{time_post}.pt")
best_gen, best_disc = self.best_model
torch.save(self.generator,f"{self.__class__.__name__}-best_gen_{time_post}.pt")
torch.save(self.discriminator,f"{self.__class__.__name__}-best_disc_{time_post}.pt")
else:
torch.save(self.generator,f"{name}-gen_{time_post}.pt")
torch.save(self.discriminator,f"{name}-disc_{time_post}.pt")
best_gen, best_disc = self.best_model
torch.save(self.generator,f"{name}-best_gen_{time_post}.pt")
torch.save(self.discriminator,f"{name}-best_disc_{time_post}.pt")
For subsequency models, I will be inheriting from this class and overidding the required methods.
gen_simple = SimpleGenerator(128,1024).to(device)
disc_simple = SimpleDiscriminator().to(device)
cgan = cGAN(gen_simple,disc_simple,train_loader)
cgan.fit(501,train_loader)
plot_losses(501,[(cgan,"Simple")])
cgan.save("cgan-500e-v2")
torch.cuda.empty_cache()
Training cGAN for 501 Epochs
Training Epoch 1/501: 100%|██████████| 98/98 [00:02<00:00, 43.77it/s, disc_loss=0.563, gen_loss=1.26]
100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 108.72743225097656, KID: 0.1169365644454956
Training Epoch 2/501: 100%|██████████| 98/98 [00:02<00:00, 43.80it/s, disc_loss=0.591, gen_loss=1.17] Training Epoch 3/501: 100%|██████████| 98/98 [00:02<00:00, 47.93it/s, disc_loss=0.609, gen_loss=1.25] Training Epoch 4/501: 100%|██████████| 98/98 [00:02<00:00, 43.97it/s, disc_loss=0.554, gen_loss=1.33] Training Epoch 5/501: 100%|██████████| 98/98 [00:02<00:00, 46.05it/s, disc_loss=0.517, gen_loss=1.29] Training Epoch 6/501: 100%|██████████| 98/98 [00:02<00:00, 48.34it/s, disc_loss=0.563, gen_loss=1.65] Training Epoch 7/501: 100%|██████████| 98/98 [00:02<00:00, 46.38it/s, disc_loss=0.596, gen_loss=1.27] Training Epoch 8/501: 100%|██████████| 98/98 [00:02<00:00, 46.72it/s, disc_loss=0.586, gen_loss=1.54] Training Epoch 9/501: 100%|██████████| 98/98 [00:01<00:00, 58.65it/s, disc_loss=0.53, gen_loss=1.37] Training Epoch 10/501: 100%|██████████| 98/98 [00:01<00:00, 51.85it/s, disc_loss=0.466, gen_loss=1.77] Training Epoch 11/501: 100%|██████████| 98/98 [00:01<00:00, 49.40it/s, disc_loss=0.56, gen_loss=1.74] Training Epoch 12/501: 100%|██████████| 98/98 [00:02<00:00, 45.35it/s, disc_loss=0.501, gen_loss=2.11] Training Epoch 13/501: 100%|██████████| 98/98 [00:01<00:00, 51.25it/s, disc_loss=0.394, gen_loss=1.65] Training Epoch 14/501: 100%|██████████| 98/98 [00:01<00:00, 51.37it/s, disc_loss=0.454, gen_loss=1.81] Training Epoch 15/501: 100%|██████████| 98/98 [00:02<00:00, 47.78it/s, disc_loss=0.463, gen_loss=2.78] Training Epoch 16/501: 100%|██████████| 98/98 [00:01<00:00, 49.09it/s, disc_loss=0.294, gen_loss=3.46] Training Epoch 17/501: 100%|██████████| 98/98 [00:02<00:00, 43.77it/s, disc_loss=0.457, gen_loss=2.38] Training Epoch 18/501: 100%|██████████| 98/98 [00:01<00:00, 50.58it/s, disc_loss=0.249, gen_loss=2.47] Training Epoch 19/501: 100%|██████████| 98/98 [00:02<00:00, 39.94it/s, disc_loss=0.324, gen_loss=2.55] Training Epoch 20/501: 100%|██████████| 98/98 [00:01<00:00, 51.25it/s, disc_loss=0.344, gen_loss=2.06] Training Epoch 21/501: 100%|██████████| 98/98 [00:02<00:00, 44.51it/s, disc_loss=0.255, gen_loss=2.66] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 53.00499725341797, KID: 0.041580237448215485
Training Epoch 22/501: 100%|██████████| 98/98 [00:02<00:00, 45.81it/s, disc_loss=0.377, gen_loss=2.43] Training Epoch 23/501: 100%|██████████| 98/98 [00:02<00:00, 47.46it/s, disc_loss=0.453, gen_loss=2.89] Training Epoch 24/501: 100%|██████████| 98/98 [00:02<00:00, 45.45it/s, disc_loss=0.254, gen_loss=2.81] Training Epoch 25/501: 100%|██████████| 98/98 [00:02<00:00, 45.33it/s, disc_loss=0.318, gen_loss=2.65] Training Epoch 26/501: 100%|██████████| 98/98 [00:02<00:00, 48.10it/s, disc_loss=0.321, gen_loss=2.51] Training Epoch 27/501: 100%|██████████| 98/98 [00:02<00:00, 43.83it/s, disc_loss=0.345, gen_loss=4.34] Training Epoch 28/501: 100%|██████████| 98/98 [00:02<00:00, 46.20it/s, disc_loss=0.221, gen_loss=3.57] Training Epoch 29/501: 100%|██████████| 98/98 [00:02<00:00, 46.50it/s, disc_loss=0.373, gen_loss=5.09] Training Epoch 30/501: 100%|██████████| 98/98 [00:02<00:00, 41.85it/s, disc_loss=0.239, gen_loss=3.74] Training Epoch 31/501: 100%|██████████| 98/98 [00:02<00:00, 42.59it/s, disc_loss=0.282, gen_loss=5.34] Training Epoch 32/501: 100%|██████████| 98/98 [00:02<00:00, 44.51it/s, disc_loss=0.369, gen_loss=3.63] Training Epoch 33/501: 100%|██████████| 98/98 [00:02<00:00, 47.96it/s, disc_loss=0.463, gen_loss=5.12] Training Epoch 34/501: 100%|██████████| 98/98 [00:02<00:00, 44.18it/s, disc_loss=0.218, gen_loss=5.66] Training Epoch 35/501: 100%|██████████| 98/98 [00:02<00:00, 39.89it/s, disc_loss=0.278, gen_loss=4.85] Training Epoch 36/501: 100%|██████████| 98/98 [00:02<00:00, 44.41it/s, disc_loss=0.167, gen_loss=3.3] Training Epoch 37/501: 100%|██████████| 98/98 [00:02<00:00, 47.85it/s, disc_loss=0.287, gen_loss=4.42] Training Epoch 38/501: 100%|██████████| 98/98 [00:02<00:00, 48.21it/s, disc_loss=0.145, gen_loss=3.1] Training Epoch 39/501: 100%|██████████| 98/98 [00:02<00:00, 44.99it/s, disc_loss=0.463, gen_loss=1.63] Training Epoch 40/501: 100%|██████████| 98/98 [00:02<00:00, 45.72it/s, disc_loss=0.182, gen_loss=3.21] Training Epoch 41/501: 100%|██████████| 98/98 [00:01<00:00, 51.16it/s, disc_loss=0.259, gen_loss=2.66] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 63.55277633666992, KID: 0.04315266013145447
Training Epoch 42/501: 100%|██████████| 98/98 [00:02<00:00, 46.04it/s, disc_loss=0.153, gen_loss=3.59] Training Epoch 43/501: 100%|██████████| 98/98 [00:02<00:00, 44.46it/s, disc_loss=0.168, gen_loss=3.98] Training Epoch 44/501: 100%|██████████| 98/98 [00:02<00:00, 45.85it/s, disc_loss=0.496, gen_loss=2.12] Training Epoch 45/501: 100%|██████████| 98/98 [00:02<00:00, 43.53it/s, disc_loss=0.204, gen_loss=3.02] Training Epoch 46/501: 100%|██████████| 98/98 [00:02<00:00, 46.12it/s, disc_loss=0.148, gen_loss=4.34] Training Epoch 47/501: 100%|██████████| 98/98 [00:02<00:00, 45.96it/s, disc_loss=0.161, gen_loss=4.33] Training Epoch 48/501: 100%|██████████| 98/98 [00:02<00:00, 45.46it/s, disc_loss=0.153, gen_loss=4.91] Training Epoch 49/501: 100%|██████████| 98/98 [00:02<00:00, 44.63it/s, disc_loss=0.175, gen_loss=4.4] Training Epoch 50/501: 100%|██████████| 98/98 [00:02<00:00, 44.15it/s, disc_loss=0.13, gen_loss=3.66] Training Epoch 51/501: 100%|██████████| 98/98 [00:02<00:00, 45.26it/s, disc_loss=0.111, gen_loss=2.83]
Training Epoch 52/501: 100%|██████████| 98/98 [00:01<00:00, 49.68it/s, disc_loss=0.252, gen_loss=2.96] Training Epoch 53/501: 100%|██████████| 98/98 [00:02<00:00, 45.20it/s, disc_loss=0.111, gen_loss=3.88] Training Epoch 54/501: 100%|██████████| 98/98 [00:02<00:00, 46.00it/s, disc_loss=0.176, gen_loss=3.95] Training Epoch 55/501: 100%|██████████| 98/98 [00:01<00:00, 49.90it/s, disc_loss=0.306, gen_loss=5.3] Training Epoch 56/501: 100%|██████████| 98/98 [00:02<00:00, 43.88it/s, disc_loss=0.137, gen_loss=3.6] Training Epoch 57/501: 100%|██████████| 98/98 [00:02<00:00, 45.87it/s, disc_loss=0.145, gen_loss=3.88] Training Epoch 58/501: 100%|██████████| 98/98 [00:02<00:00, 39.39it/s, disc_loss=0.182, gen_loss=2.76] Training Epoch 59/501: 100%|██████████| 98/98 [00:01<00:00, 50.31it/s, disc_loss=1.04, gen_loss=4.25] Training Epoch 60/501: 100%|██████████| 98/98 [00:01<00:00, 56.24it/s, disc_loss=0.119, gen_loss=3.6] Training Epoch 61/501: 100%|██████████| 98/98 [00:02<00:00, 44.13it/s, disc_loss=0.557, gen_loss=5.17] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 64.534423828125, KID: 0.04101591184735298
Training Epoch 62/501: 100%|██████████| 98/98 [00:02<00:00, 44.96it/s, disc_loss=0.174, gen_loss=5.39] Training Epoch 63/501: 100%|██████████| 98/98 [00:02<00:00, 46.42it/s, disc_loss=0.16, gen_loss=5.24] Training Epoch 64/501: 100%|██████████| 98/98 [00:01<00:00, 54.33it/s, disc_loss=0.333, gen_loss=2.42] Training Epoch 65/501: 100%|██████████| 98/98 [00:01<00:00, 49.28it/s, disc_loss=0.128, gen_loss=3.94] Training Epoch 66/501: 100%|██████████| 98/98 [00:02<00:00, 44.90it/s, disc_loss=0.19, gen_loss=4.2] Training Epoch 67/501: 100%|██████████| 98/98 [00:02<00:00, 44.03it/s, disc_loss=0.255, gen_loss=3.16] Training Epoch 68/501: 100%|██████████| 98/98 [00:02<00:00, 47.60it/s, disc_loss=0.14, gen_loss=4.73] Training Epoch 69/501: 100%|██████████| 98/98 [00:02<00:00, 44.80it/s, disc_loss=0.228, gen_loss=5.37] Training Epoch 70/501: 100%|██████████| 98/98 [00:02<00:00, 43.69it/s, disc_loss=0.143, gen_loss=3.59] Training Epoch 71/501: 100%|██████████| 98/98 [00:02<00:00, 45.45it/s, disc_loss=0.162, gen_loss=3.9] Training Epoch 72/501: 100%|██████████| 98/98 [00:02<00:00, 47.38it/s, disc_loss=0.051, gen_loss=3.99] Training Epoch 73/501: 100%|██████████| 98/98 [00:02<00:00, 43.87it/s, disc_loss=0.102, gen_loss=2.45] Training Epoch 74/501: 100%|██████████| 98/98 [00:02<00:00, 46.71it/s, disc_loss=0.0536, gen_loss=4.69] Training Epoch 75/501: 100%|██████████| 98/98 [00:02<00:00, 41.32it/s, disc_loss=0.336, gen_loss=6.97] Training Epoch 76/501: 100%|██████████| 98/98 [00:02<00:00, 44.46it/s, disc_loss=0.2, gen_loss=2.74] Training Epoch 77/501: 100%|██████████| 98/98 [00:02<00:00, 45.67it/s, disc_loss=0.0748, gen_loss=5.44] Training Epoch 78/501: 100%|██████████| 98/98 [00:02<00:00, 44.00it/s, disc_loss=0.2, gen_loss=6.08] Training Epoch 79/501: 100%|██████████| 98/98 [00:02<00:00, 44.56it/s, disc_loss=0.054, gen_loss=4.95] Training Epoch 80/501: 100%|██████████| 98/98 [00:02<00:00, 43.78it/s, disc_loss=0.125, gen_loss=2.75] Training Epoch 81/501: 100%|██████████| 98/98 [00:02<00:00, 45.57it/s, disc_loss=0.171, gen_loss=5.17] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 54.90828323364258, KID: 0.03475838154554367
Training Epoch 82/501: 100%|██████████| 98/98 [00:02<00:00, 44.79it/s, disc_loss=0.187, gen_loss=2.85] Training Epoch 83/501: 100%|██████████| 98/98 [00:02<00:00, 45.98it/s, disc_loss=0.146, gen_loss=5.72] Training Epoch 84/501: 100%|██████████| 98/98 [00:02<00:00, 40.43it/s, disc_loss=0.171, gen_loss=4.34] Training Epoch 85/501: 100%|██████████| 98/98 [00:01<00:00, 51.21it/s, disc_loss=0.081, gen_loss=4.6] Training Epoch 86/501: 100%|██████████| 98/98 [00:01<00:00, 50.85it/s, disc_loss=0.168, gen_loss=3.55] Training Epoch 87/501: 100%|██████████| 98/98 [00:01<00:00, 51.49it/s, disc_loss=0.362, gen_loss=4.33] Training Epoch 88/501: 100%|██████████| 98/98 [00:01<00:00, 52.38it/s, disc_loss=0.0995, gen_loss=3.07] Training Epoch 89/501: 100%|██████████| 98/98 [00:01<00:00, 52.89it/s, disc_loss=0.259, gen_loss=3.32] Training Epoch 90/501: 100%|██████████| 98/98 [00:02<00:00, 44.69it/s, disc_loss=0.592, gen_loss=4.83] Training Epoch 91/501: 100%|██████████| 98/98 [00:02<00:00, 46.89it/s, disc_loss=0.126, gen_loss=3.52] Training Epoch 92/501: 100%|██████████| 98/98 [00:02<00:00, 41.25it/s, disc_loss=0.155, gen_loss=6.49] Training Epoch 93/501: 100%|██████████| 98/98 [00:02<00:00, 44.82it/s, disc_loss=0.0605, gen_loss=3.91] Training Epoch 94/501: 100%|██████████| 98/98 [00:02<00:00, 46.05it/s, disc_loss=0.0425, gen_loss=6.53] Training Epoch 95/501: 100%|██████████| 98/98 [00:02<00:00, 43.99it/s, disc_loss=0.201, gen_loss=4.79] Training Epoch 96/501: 100%|██████████| 98/98 [00:02<00:00, 45.66it/s, disc_loss=0.176, gen_loss=3.22] Training Epoch 97/501: 100%|██████████| 98/98 [00:02<00:00, 44.80it/s, disc_loss=0.116, gen_loss=3.2] Training Epoch 98/501: 100%|██████████| 98/98 [00:02<00:00, 45.98it/s, disc_loss=0.21, gen_loss=6.56] Training Epoch 99/501: 100%|██████████| 98/98 [00:02<00:00, 47.50it/s, disc_loss=0.247, gen_loss=5.31] Training Epoch 100/501: 100%|██████████| 98/98 [00:02<00:00, 46.07it/s, disc_loss=0.126, gen_loss=3.21] Training Epoch 101/501: 100%|██████████| 98/98 [00:02<00:00, 42.67it/s, disc_loss=0.0719, gen_loss=5.12] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 69.50497436523438, KID: 0.04874425008893013
Training Epoch 102/501: 100%|██████████| 98/98 [00:02<00:00, 44.40it/s, disc_loss=0.215, gen_loss=3.67] Training Epoch 103/501: 100%|██████████| 98/98 [00:01<00:00, 52.50it/s, disc_loss=0.095, gen_loss=4.66] Training Epoch 104/501: 100%|██████████| 98/98 [00:02<00:00, 46.22it/s, disc_loss=0.19, gen_loss=4.48] Training Epoch 105/501: 100%|██████████| 98/98 [00:01<00:00, 53.13it/s, disc_loss=0.0977, gen_loss=4.5] Training Epoch 106/501: 100%|██████████| 98/98 [00:02<00:00, 38.69it/s, disc_loss=0.292, gen_loss=0.948] Training Epoch 107/501: 100%|██████████| 98/98 [00:02<00:00, 46.69it/s, disc_loss=0.0448, gen_loss=7.18] Training Epoch 108/501: 100%|██████████| 98/98 [00:01<00:00, 50.31it/s, disc_loss=0.157, gen_loss=3.48] Training Epoch 109/501: 100%|██████████| 98/98 [00:01<00:00, 50.31it/s, disc_loss=0.141, gen_loss=3.36] Training Epoch 110/501: 100%|██████████| 98/98 [00:02<00:00, 47.57it/s, disc_loss=0.128, gen_loss=5.39] Training Epoch 111/501: 100%|██████████| 98/98 [00:01<00:00, 50.12it/s, disc_loss=0.0608, gen_loss=4.45] Training Epoch 112/501: 100%|██████████| 98/98 [00:02<00:00, 44.27it/s, disc_loss=0.0655, gen_loss=5.21] Training Epoch 113/501: 100%|██████████| 98/98 [00:02<00:00, 46.74it/s, disc_loss=0.28, gen_loss=5.45] Training Epoch 114/501: 100%|██████████| 98/98 [00:02<00:00, 43.88it/s, disc_loss=0.21, gen_loss=5.52] Training Epoch 115/501: 100%|██████████| 98/98 [00:02<00:00, 39.93it/s, disc_loss=0.0563, gen_loss=4.35] Training Epoch 116/501: 100%|██████████| 98/98 [00:01<00:00, 54.06it/s, disc_loss=0.23, gen_loss=7.8] Training Epoch 117/501: 100%|██████████| 98/98 [00:01<00:00, 49.18it/s, disc_loss=0.102, gen_loss=4.7] Training Epoch 118/501: 100%|██████████| 98/98 [00:02<00:00, 44.65it/s, disc_loss=0.185, gen_loss=5.61] Training Epoch 119/501: 100%|██████████| 98/98 [00:02<00:00, 44.42it/s, disc_loss=0.0511, gen_loss=4.69] Training Epoch 120/501: 100%|██████████| 98/98 [00:02<00:00, 45.28it/s, disc_loss=0.209, gen_loss=6.66] Training Epoch 121/501: 100%|██████████| 98/98 [00:01<00:00, 51.60it/s, disc_loss=0.173, gen_loss=2.6] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 57.43680191040039, KID: 0.030427144840359688
Training Epoch 122/501: 100%|██████████| 98/98 [00:02<00:00, 43.34it/s, disc_loss=0.155, gen_loss=3.65] Training Epoch 123/501: 100%|██████████| 98/98 [00:02<00:00, 39.61it/s, disc_loss=0.13, gen_loss=5.56] Training Epoch 124/501: 100%|██████████| 98/98 [00:02<00:00, 44.35it/s, disc_loss=0.0778, gen_loss=7.03] Training Epoch 125/501: 100%|██████████| 98/98 [00:01<00:00, 50.45it/s, disc_loss=0.0936, gen_loss=4.15] Training Epoch 126/501: 100%|██████████| 98/98 [00:02<00:00, 48.60it/s, disc_loss=0.107, gen_loss=3.84] Training Epoch 127/501: 100%|██████████| 98/98 [00:01<00:00, 50.28it/s, disc_loss=0.0753, gen_loss=3.5] Training Epoch 128/501: 100%|██████████| 98/98 [00:02<00:00, 46.33it/s, disc_loss=0.029, gen_loss=5.41] Training Epoch 129/501: 100%|██████████| 98/98 [00:01<00:00, 49.74it/s, disc_loss=0.103, gen_loss=4.37] Training Epoch 130/501: 100%|██████████| 98/98 [00:02<00:00, 44.67it/s, disc_loss=0.206, gen_loss=3.87] Training Epoch 131/501: 100%|██████████| 98/98 [00:02<00:00, 39.20it/s, disc_loss=0.101, gen_loss=3.52] Training Epoch 132/501: 100%|██████████| 98/98 [00:01<00:00, 50.36it/s, disc_loss=0.104, gen_loss=6.25] Training Epoch 133/501: 100%|██████████| 98/98 [00:02<00:00, 42.47it/s, disc_loss=0.13, gen_loss=3.16] Training Epoch 134/501: 100%|██████████| 98/98 [00:02<00:00, 45.49it/s, disc_loss=0.595, gen_loss=7.83] Training Epoch 135/501: 100%|██████████| 98/98 [00:02<00:00, 43.37it/s, disc_loss=0.221, gen_loss=6.75] Training Epoch 136/501: 100%|██████████| 98/98 [00:02<00:00, 42.02it/s, disc_loss=0.0206, gen_loss=6.06] Training Epoch 137/501: 100%|██████████| 98/98 [00:02<00:00, 43.80it/s, disc_loss=0.259, gen_loss=5.06] Training Epoch 138/501: 100%|██████████| 98/98 [00:02<00:00, 42.56it/s, disc_loss=0.131, gen_loss=3.76] Training Epoch 139/501: 100%|██████████| 98/98 [00:02<00:00, 38.44it/s, disc_loss=0.145, gen_loss=4.33] Training Epoch 140/501: 100%|██████████| 98/98 [00:02<00:00, 44.84it/s, disc_loss=0.0876, gen_loss=4.58] Training Epoch 141/501: 100%|██████████| 98/98 [00:02<00:00, 46.83it/s, disc_loss=0.0443, gen_loss=3.85] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 77.4580307006836, KID: 0.04872879758477211
Training Epoch 142/501: 100%|██████████| 98/98 [00:02<00:00, 47.78it/s, disc_loss=0.0764, gen_loss=2.73] Training Epoch 143/501: 100%|██████████| 98/98 [00:02<00:00, 47.74it/s, disc_loss=0.0992, gen_loss=5.77] Training Epoch 144/501: 100%|██████████| 98/98 [00:02<00:00, 43.72it/s, disc_loss=0.138, gen_loss=5.5] Training Epoch 145/501: 100%|██████████| 98/98 [00:02<00:00, 43.37it/s, disc_loss=0.109, gen_loss=6.78] Training Epoch 146/501: 100%|██████████| 98/98 [00:02<00:00, 44.90it/s, disc_loss=0.138, gen_loss=5.45] Training Epoch 147/501: 100%|██████████| 98/98 [00:02<00:00, 40.63it/s, disc_loss=0.029, gen_loss=5.16] Training Epoch 148/501: 100%|██████████| 98/98 [00:02<00:00, 43.99it/s, disc_loss=0.0808, gen_loss=6.27] Training Epoch 149/501: 100%|██████████| 98/98 [00:01<00:00, 50.59it/s, disc_loss=0.0769, gen_loss=5.67] Training Epoch 150/501: 100%|██████████| 98/98 [00:01<00:00, 52.07it/s, disc_loss=0.146, gen_loss=4.57] Training Epoch 151/501: 100%|██████████| 98/98 [00:02<00:00, 46.83it/s, disc_loss=0.106, gen_loss=5.52]
Training Epoch 152/501: 100%|██████████| 98/98 [00:02<00:00, 47.61it/s, disc_loss=0.112, gen_loss=4.87] Training Epoch 153/501: 100%|██████████| 98/98 [00:02<00:00, 39.52it/s, disc_loss=0.108, gen_loss=4.11] Training Epoch 154/501: 100%|██████████| 98/98 [00:02<00:00, 45.73it/s, disc_loss=0.398, gen_loss=10.2] Training Epoch 155/501: 100%|██████████| 98/98 [00:02<00:00, 44.47it/s, disc_loss=0.0472, gen_loss=8.22] Training Epoch 156/501: 100%|██████████| 98/98 [00:02<00:00, 45.20it/s, disc_loss=0.0735, gen_loss=4.66] Training Epoch 157/501: 100%|██████████| 98/98 [00:01<00:00, 55.17it/s, disc_loss=0.0397, gen_loss=5.15] Training Epoch 158/501: 100%|██████████| 98/98 [00:02<00:00, 44.86it/s, disc_loss=0.0881, gen_loss=3.7] Training Epoch 159/501: 100%|██████████| 98/98 [00:02<00:00, 42.42it/s, disc_loss=0.105, gen_loss=3.59] Training Epoch 160/501: 100%|██████████| 98/98 [00:02<00:00, 44.42it/s, disc_loss=0.115, gen_loss=3.18] Training Epoch 161/501: 100%|██████████| 98/98 [00:02<00:00, 40.57it/s, disc_loss=0.273, gen_loss=5.17] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 87.05989837646484, KID: 0.0632399395108223
Training Epoch 162/501: 100%|██████████| 98/98 [00:02<00:00, 48.52it/s, disc_loss=0.17, gen_loss=4.19] Training Epoch 163/501: 100%|██████████| 98/98 [00:02<00:00, 44.28it/s, disc_loss=0.337, gen_loss=4.84] Training Epoch 164/501: 100%|██████████| 98/98 [00:02<00:00, 45.49it/s, disc_loss=0.058, gen_loss=3.87] Training Epoch 165/501: 100%|██████████| 98/98 [00:01<00:00, 54.10it/s, disc_loss=0.108, gen_loss=4.68] Training Epoch 166/501: 100%|██████████| 98/98 [00:01<00:00, 53.28it/s, disc_loss=0.292, gen_loss=9.63] Training Epoch 167/501: 100%|██████████| 98/98 [00:02<00:00, 44.54it/s, disc_loss=0.321, gen_loss=9.36] Training Epoch 168/501: 100%|██████████| 98/98 [00:02<00:00, 44.73it/s, disc_loss=0.106, gen_loss=2.79] Training Epoch 169/501: 100%|██████████| 98/98 [00:02<00:00, 47.83it/s, disc_loss=0.0754, gen_loss=2.75] Training Epoch 170/501: 100%|██████████| 98/98 [00:02<00:00, 44.63it/s, disc_loss=0.131, gen_loss=5.32] Training Epoch 171/501: 100%|██████████| 98/98 [00:01<00:00, 49.80it/s, disc_loss=0.148, gen_loss=7.63] Training Epoch 172/501: 100%|██████████| 98/98 [00:01<00:00, 50.25it/s, disc_loss=0.0858, gen_loss=3.34] Training Epoch 173/501: 100%|██████████| 98/98 [00:02<00:00, 45.36it/s, disc_loss=0.0575, gen_loss=4.06] Training Epoch 174/501: 100%|██████████| 98/98 [00:02<00:00, 44.36it/s, disc_loss=0.295, gen_loss=6.03] Training Epoch 175/501: 100%|██████████| 98/98 [00:02<00:00, 43.81it/s, disc_loss=1.45, gen_loss=2.03] Training Epoch 176/501: 100%|██████████| 98/98 [00:02<00:00, 48.25it/s, disc_loss=0.0999, gen_loss=3.12] Training Epoch 177/501: 100%|██████████| 98/98 [00:02<00:00, 44.35it/s, disc_loss=0.106, gen_loss=5.46] Training Epoch 178/501: 100%|██████████| 98/98 [00:02<00:00, 41.28it/s, disc_loss=0.0707, gen_loss=4.48] Training Epoch 179/501: 100%|██████████| 98/98 [00:02<00:00, 43.74it/s, disc_loss=0.0929, gen_loss=3.67] Training Epoch 180/501: 100%|██████████| 98/98 [00:02<00:00, 44.72it/s, disc_loss=0.128, gen_loss=3.3] Training Epoch 181/501: 100%|██████████| 98/98 [00:02<00:00, 44.11it/s, disc_loss=0.229, gen_loss=7.01] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 62.692684173583984, KID: 0.031642768532037735
Training Epoch 182/501: 100%|██████████| 98/98 [00:02<00:00, 46.67it/s, disc_loss=0.138, gen_loss=4.95] Training Epoch 183/501: 100%|██████████| 98/98 [00:02<00:00, 42.70it/s, disc_loss=0.178, gen_loss=3.37] Training Epoch 184/501: 100%|██████████| 98/98 [00:02<00:00, 46.63it/s, disc_loss=0.0541, gen_loss=5.04] Training Epoch 185/501: 100%|██████████| 98/98 [00:02<00:00, 44.12it/s, disc_loss=0.471, gen_loss=10.5] Training Epoch 186/501: 100%|██████████| 98/98 [00:01<00:00, 49.50it/s, disc_loss=0.155, gen_loss=5.12] Training Epoch 187/501: 100%|██████████| 98/98 [00:02<00:00, 48.24it/s, disc_loss=0.0461, gen_loss=4.42] Training Epoch 188/501: 100%|██████████| 98/98 [00:02<00:00, 45.05it/s, disc_loss=0.0116, gen_loss=6.24] Training Epoch 189/501: 100%|██████████| 98/98 [00:02<00:00, 42.64it/s, disc_loss=0.0079, gen_loss=6.28] Training Epoch 190/501: 100%|██████████| 98/98 [00:02<00:00, 45.18it/s, disc_loss=0.00477, gen_loss=6.06] Training Epoch 191/501: 100%|██████████| 98/98 [00:02<00:00, 46.96it/s, disc_loss=0.00847, gen_loss=9.23] Training Epoch 192/501: 100%|██████████| 98/98 [00:02<00:00, 43.68it/s, disc_loss=0.0368, gen_loss=6.75] Training Epoch 193/501: 100%|██████████| 98/98 [00:02<00:00, 45.31it/s, disc_loss=0.0459, gen_loss=6.47] Training Epoch 194/501: 100%|██████████| 98/98 [00:02<00:00, 42.92it/s, disc_loss=0.0333, gen_loss=4.88] Training Epoch 195/501: 100%|██████████| 98/98 [00:02<00:00, 47.78it/s, disc_loss=0.00721, gen_loss=5.84] Training Epoch 196/501: 100%|██████████| 98/98 [00:02<00:00, 45.56it/s, disc_loss=0.0914, gen_loss=3.81] Training Epoch 197/501: 100%|██████████| 98/98 [00:02<00:00, 43.89it/s, disc_loss=0.0656, gen_loss=5.9] Training Epoch 198/501: 100%|██████████| 98/98 [00:02<00:00, 44.07it/s, disc_loss=0.0652, gen_loss=3.23] Training Epoch 199/501: 100%|██████████| 98/98 [00:02<00:00, 45.27it/s, disc_loss=0.0471, gen_loss=6.73] Training Epoch 200/501: 100%|██████████| 98/98 [00:02<00:00, 44.82it/s, disc_loss=0.0777, gen_loss=4.19] Training Epoch 201/501: 100%|██████████| 98/98 [00:02<00:00, 44.42it/s, disc_loss=0.0655, gen_loss=5.91] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 73.3487777709961, KID: 0.044249050319194794
Training Epoch 202/501: 100%|██████████| 98/98 [00:02<00:00, 44.90it/s, disc_loss=0.0962, gen_loss=5.86] Training Epoch 203/501: 100%|██████████| 98/98 [00:02<00:00, 42.94it/s, disc_loss=0.0962, gen_loss=3.78] Training Epoch 204/501: 100%|██████████| 98/98 [00:02<00:00, 42.19it/s, disc_loss=0.254, gen_loss=9.2] Training Epoch 205/501: 100%|██████████| 98/98 [00:02<00:00, 44.56it/s, disc_loss=0.198, gen_loss=4.25] Training Epoch 206/501: 100%|██████████| 98/98 [00:02<00:00, 44.78it/s, disc_loss=0.186, gen_loss=10.1] Training Epoch 207/501: 100%|██████████| 98/98 [00:02<00:00, 47.62it/s, disc_loss=0.0266, gen_loss=5.07] Training Epoch 208/501: 100%|██████████| 98/98 [00:02<00:00, 45.85it/s, disc_loss=0.0889, gen_loss=4.55] Training Epoch 209/501: 100%|██████████| 98/98 [00:02<00:00, 39.44it/s, disc_loss=0.959, gen_loss=2.44] Training Epoch 210/501: 100%|██████████| 98/98 [00:02<00:00, 42.15it/s, disc_loss=0.155, gen_loss=7.39] Training Epoch 211/501: 100%|██████████| 98/98 [00:02<00:00, 43.46it/s, disc_loss=0.204, gen_loss=4.51] Training Epoch 212/501: 100%|██████████| 98/98 [00:02<00:00, 42.89it/s, disc_loss=0.081, gen_loss=5.68] Training Epoch 213/501: 100%|██████████| 98/98 [00:02<00:00, 45.05it/s, disc_loss=0.086, gen_loss=5.36] Training Epoch 214/501: 100%|██████████| 98/98 [00:02<00:00, 46.75it/s, disc_loss=0.128, gen_loss=5.3] Training Epoch 215/501: 100%|██████████| 98/98 [00:02<00:00, 45.56it/s, disc_loss=0.0612, gen_loss=5.38] Training Epoch 216/501: 100%|██████████| 98/98 [00:02<00:00, 47.60it/s, disc_loss=0.134, gen_loss=5.1] Training Epoch 217/501: 100%|██████████| 98/98 [00:02<00:00, 45.18it/s, disc_loss=0.0374, gen_loss=4.52] Training Epoch 218/501: 100%|██████████| 98/98 [00:01<00:00, 50.28it/s, disc_loss=0.0922, gen_loss=3.25] Training Epoch 219/501: 100%|██████████| 98/98 [00:01<00:00, 49.72it/s, disc_loss=0.259, gen_loss=8.92] Training Epoch 220/501: 100%|██████████| 98/98 [00:02<00:00, 47.13it/s, disc_loss=0.0911, gen_loss=3.41] Training Epoch 221/501: 100%|██████████| 98/98 [00:01<00:00, 58.81it/s, disc_loss=0.145, gen_loss=3.11] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 68.39086151123047, KID: 0.035694729536771774
Training Epoch 222/501: 100%|██████████| 98/98 [00:01<00:00, 50.31it/s, disc_loss=0.0812, gen_loss=5.42] Training Epoch 223/501: 100%|██████████| 98/98 [00:02<00:00, 45.95it/s, disc_loss=0.141, gen_loss=1.3] Training Epoch 224/501: 100%|██████████| 98/98 [00:02<00:00, 47.89it/s, disc_loss=0.122, gen_loss=6.9] Training Epoch 225/501: 100%|██████████| 98/98 [00:02<00:00, 46.42it/s, disc_loss=0.0556, gen_loss=3.94] Training Epoch 226/501: 100%|██████████| 98/98 [00:02<00:00, 48.65it/s, disc_loss=0.374, gen_loss=9.84] Training Epoch 227/501: 100%|██████████| 98/98 [00:02<00:00, 48.40it/s, disc_loss=0.254, gen_loss=8.7] Training Epoch 228/501: 100%|██████████| 98/98 [00:02<00:00, 43.79it/s, disc_loss=0.07, gen_loss=7.13] Training Epoch 229/501: 100%|██████████| 98/98 [00:02<00:00, 45.15it/s, disc_loss=0.102, gen_loss=5.39] Training Epoch 230/501: 100%|██████████| 98/98 [00:02<00:00, 45.59it/s, disc_loss=0.238, gen_loss=8.99] Training Epoch 231/501: 100%|██████████| 98/98 [00:02<00:00, 45.18it/s, disc_loss=0.135, gen_loss=3.14] Training Epoch 232/501: 100%|██████████| 98/98 [00:02<00:00, 46.67it/s, disc_loss=0.0928, gen_loss=4.44] Training Epoch 233/501: 100%|██████████| 98/98 [00:02<00:00, 41.23it/s, disc_loss=0.0784, gen_loss=3.95] Training Epoch 234/501: 100%|██████████| 98/98 [00:02<00:00, 45.68it/s, disc_loss=0.483, gen_loss=9.15] Training Epoch 235/501: 100%|██████████| 98/98 [00:02<00:00, 44.97it/s, disc_loss=0.142, gen_loss=6.45] Training Epoch 236/501: 100%|██████████| 98/98 [00:02<00:00, 44.67it/s, disc_loss=0.0586, gen_loss=2.86] Training Epoch 237/501: 100%|██████████| 98/98 [00:02<00:00, 45.53it/s, disc_loss=0.111, gen_loss=5.35] Training Epoch 238/501: 100%|██████████| 98/98 [00:02<00:00, 44.62it/s, disc_loss=0.0285, gen_loss=5.05] Training Epoch 239/501: 100%|██████████| 98/98 [00:02<00:00, 46.59it/s, disc_loss=0.079, gen_loss=4.94] Training Epoch 240/501: 100%|██████████| 98/98 [00:02<00:00, 43.50it/s, disc_loss=0.154, gen_loss=5.87] Training Epoch 241/501: 100%|██████████| 98/98 [00:01<00:00, 52.95it/s, disc_loss=0.113, gen_loss=4.98] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 82.64505004882812, KID: 0.05528467893600464
Training Epoch 242/501: 100%|██████████| 98/98 [00:02<00:00, 44.10it/s, disc_loss=0.752, gen_loss=9.77] Training Epoch 243/501: 100%|██████████| 98/98 [00:01<00:00, 55.38it/s, disc_loss=0.237, gen_loss=3.01] Training Epoch 244/501: 100%|██████████| 98/98 [00:02<00:00, 45.02it/s, disc_loss=0.22, gen_loss=2.64] Training Epoch 245/501: 100%|██████████| 98/98 [00:02<00:00, 45.28it/s, disc_loss=0.0721, gen_loss=5.14] Training Epoch 246/501: 100%|██████████| 98/98 [00:02<00:00, 45.98it/s, disc_loss=0.276, gen_loss=4.69] Training Epoch 247/501: 100%|██████████| 98/98 [00:01<00:00, 49.90it/s, disc_loss=0.0869, gen_loss=6.86] Training Epoch 248/501: 100%|██████████| 98/98 [00:01<00:00, 49.22it/s, disc_loss=0.0863, gen_loss=4.72] Training Epoch 249/501: 100%|██████████| 98/98 [00:02<00:00, 48.85it/s, disc_loss=0.127, gen_loss=2.8] Training Epoch 250/501: 100%|██████████| 98/98 [00:02<00:00, 42.28it/s, disc_loss=1.37, gen_loss=7.45] Training Epoch 251/501: 100%|██████████| 98/98 [00:02<00:00, 45.17it/s, disc_loss=0.217, gen_loss=6.57]
Training Epoch 252/501: 100%|██████████| 98/98 [00:02<00:00, 46.67it/s, disc_loss=0.0595, gen_loss=5.54] Training Epoch 253/501: 100%|██████████| 98/98 [00:02<00:00, 45.35it/s, disc_loss=0.0206, gen_loss=4.72] Training Epoch 254/501: 100%|██████████| 98/98 [00:01<00:00, 54.69it/s, disc_loss=0.0594, gen_loss=4.21] Training Epoch 255/501: 100%|██████████| 98/98 [00:02<00:00, 44.88it/s, disc_loss=0.493, gen_loss=4.53] Training Epoch 256/501: 100%|██████████| 98/98 [00:02<00:00, 42.20it/s, disc_loss=0.0966, gen_loss=4.76] Training Epoch 257/501: 100%|██████████| 98/98 [00:02<00:00, 48.32it/s, disc_loss=0.173, gen_loss=5.81] Training Epoch 258/501: 100%|██████████| 98/98 [00:01<00:00, 49.10it/s, disc_loss=0.0766, gen_loss=5.41] Training Epoch 259/501: 100%|██████████| 98/98 [00:02<00:00, 45.09it/s, disc_loss=0.0641, gen_loss=4.53] Training Epoch 260/501: 100%|██████████| 98/98 [00:02<00:00, 45.34it/s, disc_loss=1.35, gen_loss=3.76] Training Epoch 261/501: 100%|██████████| 98/98 [00:02<00:00, 46.33it/s, disc_loss=0.0631, gen_loss=7.2] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 64.75682067871094, KID: 0.034916024655103683
Training Epoch 262/501: 100%|██████████| 98/98 [00:02<00:00, 44.05it/s, disc_loss=0.0895, gen_loss=3.33] Training Epoch 263/501: 100%|██████████| 98/98 [00:02<00:00, 44.17it/s, disc_loss=0.0494, gen_loss=4] Training Epoch 264/501: 100%|██████████| 98/98 [00:02<00:00, 38.56it/s, disc_loss=0.173, gen_loss=6.16] Training Epoch 265/501: 100%|██████████| 98/98 [00:02<00:00, 44.54it/s, disc_loss=0.22, gen_loss=4.37] Training Epoch 266/501: 100%|██████████| 98/98 [00:02<00:00, 46.75it/s, disc_loss=0.0856, gen_loss=4.18] Training Epoch 267/501: 100%|██████████| 98/98 [00:02<00:00, 43.13it/s, disc_loss=0.128, gen_loss=7.21] Training Epoch 268/501: 100%|██████████| 98/98 [00:02<00:00, 48.74it/s, disc_loss=0.103, gen_loss=3.99] Training Epoch 269/501: 100%|██████████| 98/98 [00:01<00:00, 50.29it/s, disc_loss=0.134, gen_loss=3.64] Training Epoch 270/501: 100%|██████████| 98/98 [00:01<00:00, 50.16it/s, disc_loss=0.156, gen_loss=2.7] Training Epoch 271/501: 100%|██████████| 98/98 [00:01<00:00, 50.53it/s, disc_loss=0.051, gen_loss=5.06] Training Epoch 272/501: 100%|██████████| 98/98 [00:02<00:00, 43.09it/s, disc_loss=0.113, gen_loss=4.17] Training Epoch 273/501: 100%|██████████| 98/98 [00:02<00:00, 43.95it/s, disc_loss=0.0235, gen_loss=7.55] Training Epoch 274/501: 100%|██████████| 98/98 [00:02<00:00, 44.34it/s, disc_loss=0.0143, gen_loss=5.5] Training Epoch 275/501: 100%|██████████| 98/98 [00:02<00:00, 48.18it/s, disc_loss=0.0302, gen_loss=4.14] Training Epoch 276/501: 100%|██████████| 98/98 [00:02<00:00, 44.05it/s, disc_loss=0.0278, gen_loss=4.94] Training Epoch 277/501: 100%|██████████| 98/98 [00:02<00:00, 48.74it/s, disc_loss=0.0577, gen_loss=6.09] Training Epoch 278/501: 100%|██████████| 98/98 [00:01<00:00, 49.74it/s, disc_loss=0.0637, gen_loss=6.39] Training Epoch 279/501: 100%|██████████| 98/98 [00:01<00:00, 56.48it/s, disc_loss=0.072, gen_loss=4.03] Training Epoch 280/501: 100%|██████████| 98/98 [00:02<00:00, 42.93it/s, disc_loss=0.0552, gen_loss=3.56] Training Epoch 281/501: 100%|██████████| 98/98 [00:02<00:00, 45.93it/s, disc_loss=0.0277, gen_loss=6.33] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 63.88471221923828, KID: 0.034934282302856445
Training Epoch 282/501: 100%|██████████| 98/98 [00:02<00:00, 41.99it/s, disc_loss=0.023, gen_loss=7.98] Training Epoch 283/501: 100%|██████████| 98/98 [00:02<00:00, 43.35it/s, disc_loss=0.168, gen_loss=8.97] Training Epoch 284/501: 100%|██████████| 98/98 [00:02<00:00, 44.14it/s, disc_loss=0.0455, gen_loss=5.58] Training Epoch 285/501: 100%|██████████| 98/98 [00:01<00:00, 49.53it/s, disc_loss=0.104, gen_loss=3.79] Training Epoch 286/501: 100%|██████████| 98/98 [00:01<00:00, 51.75it/s, disc_loss=0.0232, gen_loss=5.42] Training Epoch 287/501: 100%|██████████| 98/98 [00:01<00:00, 50.32it/s, disc_loss=0.0289, gen_loss=5.51] Training Epoch 288/501: 100%|██████████| 98/98 [00:02<00:00, 48.72it/s, disc_loss=0.027, gen_loss=5.21] Training Epoch 289/501: 100%|██████████| 98/98 [00:02<00:00, 40.32it/s, disc_loss=0.0593, gen_loss=7.38] Training Epoch 290/501: 100%|██████████| 98/98 [00:02<00:00, 46.08it/s, disc_loss=0.0688, gen_loss=4.78] Training Epoch 291/501: 100%|██████████| 98/98 [00:01<00:00, 61.33it/s, disc_loss=0.2, gen_loss=5.69] Training Epoch 292/501: 100%|██████████| 98/98 [00:01<00:00, 49.82it/s, disc_loss=0.0957, gen_loss=5.49] Training Epoch 293/501: 100%|██████████| 98/98 [00:01<00:00, 61.92it/s, disc_loss=0.141, gen_loss=4.27] Training Epoch 294/501: 100%|██████████| 98/98 [00:01<00:00, 52.69it/s, disc_loss=0.0893, gen_loss=6] Training Epoch 295/501: 100%|██████████| 98/98 [00:02<00:00, 44.28it/s, disc_loss=0.275, gen_loss=3.12] Training Epoch 296/501: 100%|██████████| 98/98 [00:02<00:00, 45.81it/s, disc_loss=0.012, gen_loss=5.72] Training Epoch 297/501: 100%|██████████| 98/98 [00:02<00:00, 39.78it/s, disc_loss=0.136, gen_loss=5.95] Training Epoch 298/501: 100%|██████████| 98/98 [00:02<00:00, 43.86it/s, disc_loss=0.0537, gen_loss=7.37] Training Epoch 299/501: 100%|██████████| 98/98 [00:01<00:00, 50.25it/s, disc_loss=0.0416, gen_loss=4.93] Training Epoch 300/501: 100%|██████████| 98/98 [00:01<00:00, 49.77it/s, disc_loss=1.34, gen_loss=8] Training Epoch 301/501: 100%|██████████| 98/98 [00:02<00:00, 45.98it/s, disc_loss=0.0509, gen_loss=5.28] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 69.01610565185547, KID: 0.03959473595023155
Training Epoch 302/501: 100%|██████████| 98/98 [00:02<00:00, 42.50it/s, disc_loss=0.475, gen_loss=0.428] Training Epoch 303/501: 100%|██████████| 98/98 [00:02<00:00, 38.74it/s, disc_loss=0.0712, gen_loss=8.1] Training Epoch 304/501: 100%|██████████| 98/98 [00:02<00:00, 46.27it/s, disc_loss=0.122, gen_loss=5.37] Training Epoch 305/501: 100%|██████████| 98/98 [00:02<00:00, 43.24it/s, disc_loss=0.119, gen_loss=4.12] Training Epoch 306/501: 100%|██████████| 98/98 [00:02<00:00, 46.64it/s, disc_loss=0.0419, gen_loss=5.3] Training Epoch 307/501: 100%|██████████| 98/98 [00:02<00:00, 42.31it/s, disc_loss=0.0261, gen_loss=5.06] Training Epoch 308/501: 100%|██████████| 98/98 [00:02<00:00, 41.99it/s, disc_loss=0.12, gen_loss=4.76] Training Epoch 309/501: 100%|██████████| 98/98 [00:02<00:00, 42.49it/s, disc_loss=0.15, gen_loss=4.62] Training Epoch 310/501: 100%|██████████| 98/98 [00:02<00:00, 44.67it/s, disc_loss=0.048, gen_loss=5.14] Training Epoch 311/501: 100%|██████████| 98/98 [00:02<00:00, 38.74it/s, disc_loss=0.122, gen_loss=12.4] Training Epoch 312/501: 100%|██████████| 98/98 [00:02<00:00, 44.89it/s, disc_loss=0.0788, gen_loss=5.85] Training Epoch 313/501: 100%|██████████| 98/98 [00:02<00:00, 48.20it/s, disc_loss=0.0485, gen_loss=5.72] Training Epoch 314/501: 100%|██████████| 98/98 [00:01<00:00, 49.59it/s, disc_loss=0.0526, gen_loss=4.82] Training Epoch 315/501: 100%|██████████| 98/98 [00:02<00:00, 43.14it/s, disc_loss=0.296, gen_loss=7.35] Training Epoch 316/501: 100%|██████████| 98/98 [00:02<00:00, 45.43it/s, disc_loss=0.0344, gen_loss=5.32] Training Epoch 317/501: 100%|██████████| 98/98 [00:02<00:00, 45.15it/s, disc_loss=0.00824, gen_loss=4.84] Training Epoch 318/501: 100%|██████████| 98/98 [00:01<00:00, 49.15it/s, disc_loss=0.0815, gen_loss=4.62] Training Epoch 319/501: 100%|██████████| 98/98 [00:02<00:00, 42.44it/s, disc_loss=0.0501, gen_loss=6.41] Training Epoch 320/501: 100%|██████████| 98/98 [00:02<00:00, 43.97it/s, disc_loss=0.0806, gen_loss=3.81] Training Epoch 321/501: 100%|██████████| 98/98 [00:02<00:00, 45.58it/s, disc_loss=0.28, gen_loss=12.3] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 79.71592712402344, KID: 0.045825861394405365
Training Epoch 322/501: 100%|██████████| 98/98 [00:02<00:00, 42.77it/s, disc_loss=0.109, gen_loss=5.55] Training Epoch 323/501: 100%|██████████| 98/98 [00:02<00:00, 48.73it/s, disc_loss=0.0924, gen_loss=7.44] Training Epoch 324/501: 100%|██████████| 98/98 [00:02<00:00, 43.45it/s, disc_loss=0.132, gen_loss=7.96] Training Epoch 325/501: 100%|██████████| 98/98 [00:02<00:00, 45.40it/s, disc_loss=0.0673, gen_loss=3.69] Training Epoch 326/501: 100%|██████████| 98/98 [00:01<00:00, 51.29it/s, disc_loss=0.154, gen_loss=9.28] Training Epoch 327/501: 100%|██████████| 98/98 [00:01<00:00, 50.59it/s, disc_loss=0.0463, gen_loss=4.65] Training Epoch 328/501: 100%|██████████| 98/98 [00:02<00:00, 42.18it/s, disc_loss=0.0288, gen_loss=6.55] Training Epoch 329/501: 100%|██████████| 98/98 [00:02<00:00, 48.67it/s, disc_loss=0.0655, gen_loss=5.54] Training Epoch 330/501: 100%|██████████| 98/98 [00:02<00:00, 44.56it/s, disc_loss=0.0899, gen_loss=5.79] Training Epoch 331/501: 100%|██████████| 98/98 [00:02<00:00, 44.89it/s, disc_loss=0.063, gen_loss=2.88] Training Epoch 332/501: 100%|██████████| 98/98 [00:02<00:00, 45.09it/s, disc_loss=0.214, gen_loss=2.35] Training Epoch 333/501: 100%|██████████| 98/98 [00:02<00:00, 43.70it/s, disc_loss=0.029, gen_loss=6.28] Training Epoch 334/501: 100%|██████████| 98/98 [00:02<00:00, 45.08it/s, disc_loss=0.0297, gen_loss=4.3] Training Epoch 335/501: 100%|██████████| 98/98 [00:02<00:00, 46.96it/s, disc_loss=0.013, gen_loss=6] Training Epoch 336/501: 100%|██████████| 98/98 [00:01<00:00, 52.45it/s, disc_loss=0.0541, gen_loss=5.73] Training Epoch 337/501: 100%|██████████| 98/98 [00:02<00:00, 40.32it/s, disc_loss=0.0216, gen_loss=6.34] Training Epoch 338/501: 100%|██████████| 98/98 [00:02<00:00, 44.25it/s, disc_loss=0.0328, gen_loss=6.19] Training Epoch 339/501: 100%|██████████| 98/98 [00:01<00:00, 53.88it/s, disc_loss=0.0795, gen_loss=3.33] Training Epoch 340/501: 100%|██████████| 98/98 [00:02<00:00, 45.49it/s, disc_loss=0.122, gen_loss=8.16] Training Epoch 341/501: 100%|██████████| 98/98 [00:02<00:00, 44.38it/s, disc_loss=0.0922, gen_loss=7.77] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 70.9151611328125, KID: 0.040683142840862274
Training Epoch 342/501: 100%|██████████| 98/98 [00:02<00:00, 42.81it/s, disc_loss=0.163, gen_loss=8.13] Training Epoch 343/501: 100%|██████████| 98/98 [00:02<00:00, 42.81it/s, disc_loss=0.125, gen_loss=4.32] Training Epoch 344/501: 100%|██████████| 98/98 [00:02<00:00, 42.36it/s, disc_loss=0.147, gen_loss=1.83] Training Epoch 345/501: 100%|██████████| 98/98 [00:02<00:00, 42.16it/s, disc_loss=0.14, gen_loss=7.05] Training Epoch 346/501: 100%|██████████| 98/98 [00:02<00:00, 39.48it/s, disc_loss=1.34, gen_loss=10.8] Training Epoch 347/501: 100%|██████████| 98/98 [00:02<00:00, 42.50it/s, disc_loss=0.215, gen_loss=1.4] Training Epoch 348/501: 100%|██████████| 98/98 [00:02<00:00, 42.79it/s, disc_loss=0.115, gen_loss=3.91] Training Epoch 349/501: 100%|██████████| 98/98 [00:02<00:00, 44.13it/s, disc_loss=0.0894, gen_loss=3.13] Training Epoch 350/501: 100%|██████████| 98/98 [00:02<00:00, 46.24it/s, disc_loss=0.137, gen_loss=3.41] Training Epoch 351/501: 100%|██████████| 98/98 [00:02<00:00, 43.15it/s, disc_loss=0.0727, gen_loss=5.18]
Training Epoch 352/501: 100%|██████████| 98/98 [00:02<00:00, 43.47it/s, disc_loss=0.0347, gen_loss=5.9] Training Epoch 353/501: 100%|██████████| 98/98 [00:02<00:00, 43.70it/s, disc_loss=0.179, gen_loss=5.52] Training Epoch 354/501: 100%|██████████| 98/98 [00:02<00:00, 42.69it/s, disc_loss=0.022, gen_loss=6.74] Training Epoch 355/501: 100%|██████████| 98/98 [00:02<00:00, 45.29it/s, disc_loss=0.0511, gen_loss=6.74] Training Epoch 356/501: 100%|██████████| 98/98 [00:02<00:00, 45.62it/s, disc_loss=0.416, gen_loss=0.345] Training Epoch 357/501: 100%|██████████| 98/98 [00:02<00:00, 43.10it/s, disc_loss=0.0376, gen_loss=6.23] Training Epoch 358/501: 100%|██████████| 98/98 [00:02<00:00, 43.45it/s, disc_loss=0.0477, gen_loss=8.29] Training Epoch 359/501: 100%|██████████| 98/98 [00:02<00:00, 48.78it/s, disc_loss=0.243, gen_loss=8.7] Training Epoch 360/501: 100%|██████████| 98/98 [00:02<00:00, 41.44it/s, disc_loss=0.0284, gen_loss=4.24] Training Epoch 361/501: 100%|██████████| 98/98 [00:02<00:00, 44.11it/s, disc_loss=0.0184, gen_loss=7.55] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 69.93022155761719, KID: 0.03374826908111572
Training Epoch 362/501: 100%|██████████| 98/98 [00:02<00:00, 43.64it/s, disc_loss=0.0855, gen_loss=4.71] Training Epoch 363/501: 100%|██████████| 98/98 [00:02<00:00, 46.18it/s, disc_loss=0.0964, gen_loss=4.5] Training Epoch 364/501: 100%|██████████| 98/98 [00:02<00:00, 44.19it/s, disc_loss=0.0142, gen_loss=4.36] Training Epoch 365/501: 100%|██████████| 98/98 [00:02<00:00, 45.11it/s, disc_loss=0.0179, gen_loss=6.7] Training Epoch 366/501: 100%|██████████| 98/98 [00:02<00:00, 46.65it/s, disc_loss=0.758, gen_loss=2.63] Training Epoch 367/501: 100%|██████████| 98/98 [00:02<00:00, 42.42it/s, disc_loss=0.0397, gen_loss=4.7] Training Epoch 368/501: 100%|██████████| 98/98 [00:02<00:00, 40.90it/s, disc_loss=0.0304, gen_loss=5.1] Training Epoch 369/501: 100%|██████████| 98/98 [00:02<00:00, 46.84it/s, disc_loss=0.0611, gen_loss=4.26] Training Epoch 370/501: 100%|██████████| 98/98 [00:02<00:00, 45.94it/s, disc_loss=0.0672, gen_loss=6.2] Training Epoch 371/501: 100%|██████████| 98/98 [00:02<00:00, 44.87it/s, disc_loss=0.0122, gen_loss=5.66] Training Epoch 372/501: 100%|██████████| 98/98 [00:02<00:00, 46.31it/s, disc_loss=0.115, gen_loss=10.7] Training Epoch 373/501: 100%|██████████| 98/98 [00:02<00:00, 48.21it/s, disc_loss=0.0309, gen_loss=12.3] Training Epoch 374/501: 100%|██████████| 98/98 [00:02<00:00, 48.55it/s, disc_loss=0.0926, gen_loss=4.75] Training Epoch 375/501: 100%|██████████| 98/98 [00:02<00:00, 48.35it/s, disc_loss=0.0418, gen_loss=5.66] Training Epoch 376/501: 100%|██████████| 98/98 [00:02<00:00, 44.93it/s, disc_loss=0.0822, gen_loss=4.12] Training Epoch 377/501: 100%|██████████| 98/98 [00:02<00:00, 40.40it/s, disc_loss=0.159, gen_loss=7.88] Training Epoch 378/501: 100%|██████████| 98/98 [00:02<00:00, 43.64it/s, disc_loss=0.024, gen_loss=6.73] Training Epoch 379/501: 100%|██████████| 98/98 [00:01<00:00, 50.03it/s, disc_loss=0.0143, gen_loss=5.75] Training Epoch 380/501: 100%|██████████| 98/98 [00:01<00:00, 50.36it/s, disc_loss=0.0217, gen_loss=5.47] Training Epoch 381/501: 100%|██████████| 98/98 [00:01<00:00, 49.97it/s, disc_loss=0.0393, gen_loss=4.94] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 68.84089660644531, KID: 0.03525520861148834
Training Epoch 382/501: 100%|██████████| 98/98 [00:01<00:00, 57.55it/s, disc_loss=0.0396, gen_loss=5.07] Training Epoch 383/501: 100%|██████████| 98/98 [00:01<00:00, 49.72it/s, disc_loss=0.0712, gen_loss=6.64] Training Epoch 384/501: 100%|██████████| 98/98 [00:01<00:00, 50.95it/s, disc_loss=0.0262, gen_loss=6.29] Training Epoch 385/501: 100%|██████████| 98/98 [00:02<00:00, 41.89it/s, disc_loss=0.0912, gen_loss=4.16] Training Epoch 386/501: 100%|██████████| 98/98 [00:02<00:00, 46.54it/s, disc_loss=0.0371, gen_loss=6.36] Training Epoch 387/501: 100%|██████████| 98/98 [00:02<00:00, 46.02it/s, disc_loss=0.0434, gen_loss=5.37] Training Epoch 388/501: 100%|██████████| 98/98 [00:02<00:00, 44.46it/s, disc_loss=0.0395, gen_loss=6.81] Training Epoch 389/501: 100%|██████████| 98/98 [00:02<00:00, 43.92it/s, disc_loss=0.111, gen_loss=8] Training Epoch 390/501: 100%|██████████| 98/98 [00:02<00:00, 43.99it/s, disc_loss=0.043, gen_loss=5.57] Training Epoch 391/501: 100%|██████████| 98/98 [00:02<00:00, 43.61it/s, disc_loss=0.0294, gen_loss=5.5] Training Epoch 392/501: 100%|██████████| 98/98 [00:02<00:00, 43.95it/s, disc_loss=0.213, gen_loss=0.726] Training Epoch 393/501: 100%|██████████| 98/98 [00:02<00:00, 46.45it/s, disc_loss=0.086, gen_loss=3.81] Training Epoch 394/501: 100%|██████████| 98/98 [00:02<00:00, 45.92it/s, disc_loss=0.0372, gen_loss=6.64] Training Epoch 395/501: 100%|██████████| 98/98 [00:01<00:00, 49.50it/s, disc_loss=0.0271, gen_loss=5.14] Training Epoch 396/501: 100%|██████████| 98/98 [00:02<00:00, 45.53it/s, disc_loss=0.0551, gen_loss=5.12] Training Epoch 397/501: 100%|██████████| 98/98 [00:02<00:00, 45.01it/s, disc_loss=0.0382, gen_loss=7.42] Training Epoch 398/501: 100%|██████████| 98/98 [00:02<00:00, 46.46it/s, disc_loss=0.147, gen_loss=1.67] Training Epoch 399/501: 100%|██████████| 98/98 [00:02<00:00, 44.68it/s, disc_loss=0.127, gen_loss=5.69] Training Epoch 400/501: 100%|██████████| 98/98 [00:02<00:00, 46.80it/s, disc_loss=0.435, gen_loss=4.98] Training Epoch 401/501: 100%|██████████| 98/98 [00:02<00:00, 44.29it/s, disc_loss=0.111, gen_loss=3.99] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 68.9583511352539, KID: 0.04055214673280716
Training Epoch 402/501: 100%|██████████| 98/98 [00:02<00:00, 48.26it/s, disc_loss=0.0654, gen_loss=5.23] Training Epoch 403/501: 100%|██████████| 98/98 [00:02<00:00, 45.97it/s, disc_loss=0.119, gen_loss=3.67] Training Epoch 404/501: 100%|██████████| 98/98 [00:02<00:00, 46.61it/s, disc_loss=0.217, gen_loss=3.29] Training Epoch 405/501: 100%|██████████| 98/98 [00:02<00:00, 45.65it/s, disc_loss=0.013, gen_loss=4.99] Training Epoch 406/501: 100%|██████████| 98/98 [00:02<00:00, 48.82it/s, disc_loss=0.0436, gen_loss=6.23] Training Epoch 407/501: 100%|██████████| 98/98 [00:02<00:00, 48.17it/s, disc_loss=0.0547, gen_loss=5.15] Training Epoch 408/501: 100%|██████████| 98/98 [00:02<00:00, 42.46it/s, disc_loss=0.102, gen_loss=5.9] Training Epoch 409/501: 100%|██████████| 98/98 [00:01<00:00, 50.30it/s, disc_loss=0.12, gen_loss=4.51] Training Epoch 410/501: 100%|██████████| 98/98 [00:02<00:00, 43.16it/s, disc_loss=0.0586, gen_loss=6.09] Training Epoch 411/501: 100%|██████████| 98/98 [00:01<00:00, 49.63it/s, disc_loss=0.101, gen_loss=3.38] Training Epoch 412/501: 100%|██████████| 98/98 [00:01<00:00, 50.79it/s, disc_loss=0.15, gen_loss=5.13] Training Epoch 413/501: 100%|██████████| 98/98 [00:01<00:00, 53.25it/s, disc_loss=0.0212, gen_loss=4.89] Training Epoch 414/501: 100%|██████████| 98/98 [00:01<00:00, 49.94it/s, disc_loss=0.0415, gen_loss=2.43] Training Epoch 415/501: 100%|██████████| 98/98 [00:02<00:00, 48.09it/s, disc_loss=0.141, gen_loss=2.18] Training Epoch 416/501: 100%|██████████| 98/98 [00:02<00:00, 43.39it/s, disc_loss=0.277, gen_loss=6.36] Training Epoch 417/501: 100%|██████████| 98/98 [00:02<00:00, 44.61it/s, disc_loss=0.0226, gen_loss=4.94] Training Epoch 418/501: 100%|██████████| 98/98 [00:02<00:00, 43.51it/s, disc_loss=0.104, gen_loss=9.7] Training Epoch 419/501: 100%|██████████| 98/98 [00:02<00:00, 44.31it/s, disc_loss=0.11, gen_loss=4.6] Training Epoch 420/501: 100%|██████████| 98/98 [00:02<00:00, 44.69it/s, disc_loss=0.0908, gen_loss=6.52] Training Epoch 421/501: 100%|██████████| 98/98 [00:02<00:00, 44.18it/s, disc_loss=0.0253, gen_loss=6.57] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 65.97594451904297, KID: 0.037072453647851944
Training Epoch 422/501: 100%|██████████| 98/98 [00:02<00:00, 42.77it/s, disc_loss=0.0506, gen_loss=5.13] Training Epoch 423/501: 100%|██████████| 98/98 [00:02<00:00, 44.15it/s, disc_loss=0.02, gen_loss=6.39] Training Epoch 424/501: 100%|██████████| 98/98 [00:02<00:00, 44.45it/s, disc_loss=0.0548, gen_loss=5.1] Training Epoch 425/501: 100%|██████████| 98/98 [00:02<00:00, 47.87it/s, disc_loss=0.00754, gen_loss=8.13] Training Epoch 426/501: 100%|██████████| 98/98 [00:02<00:00, 43.12it/s, disc_loss=0.069, gen_loss=6.98] Training Epoch 427/501: 100%|██████████| 98/98 [00:02<00:00, 42.89it/s, disc_loss=0.0358, gen_loss=2.85] Training Epoch 428/501: 100%|██████████| 98/98 [00:02<00:00, 42.55it/s, disc_loss=0.0347, gen_loss=6.46] Training Epoch 429/501: 100%|██████████| 98/98 [00:02<00:00, 44.46it/s, disc_loss=0.0943, gen_loss=4.45] Training Epoch 430/501: 100%|██████████| 98/98 [00:02<00:00, 43.74it/s, disc_loss=0.169, gen_loss=3.78] Training Epoch 431/501: 100%|██████████| 98/98 [00:02<00:00, 45.03it/s, disc_loss=0.134, gen_loss=3.58] Training Epoch 432/501: 100%|██████████| 98/98 [00:02<00:00, 39.22it/s, disc_loss=0.038, gen_loss=3.22] Training Epoch 433/501: 100%|██████████| 98/98 [00:02<00:00, 47.39it/s, disc_loss=0.0843, gen_loss=4.29] Training Epoch 434/501: 100%|██████████| 98/98 [00:01<00:00, 53.30it/s, disc_loss=0.011, gen_loss=8.71] Training Epoch 435/501: 100%|██████████| 98/98 [00:01<00:00, 64.22it/s, disc_loss=0.0101, gen_loss=5.62] Training Epoch 436/501: 100%|██████████| 98/98 [00:02<00:00, 44.84it/s, disc_loss=0.075, gen_loss=4.34] Training Epoch 437/501: 100%|██████████| 98/98 [00:02<00:00, 44.29it/s, disc_loss=0.0579, gen_loss=8.63] Training Epoch 438/501: 100%|██████████| 98/98 [00:02<00:00, 45.16it/s, disc_loss=0.0446, gen_loss=3.71] Training Epoch 439/501: 100%|██████████| 98/98 [00:02<00:00, 43.96it/s, disc_loss=0.137, gen_loss=2.06] Training Epoch 440/501: 100%|██████████| 98/98 [00:02<00:00, 48.05it/s, disc_loss=0.192, gen_loss=8.87] Training Epoch 441/501: 100%|██████████| 98/98 [00:02<00:00, 40.15it/s, disc_loss=0.0911, gen_loss=5.03] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 72.26649475097656, KID: 0.03805358707904816
Training Epoch 442/501: 100%|██████████| 98/98 [00:02<00:00, 43.76it/s, disc_loss=0.181, gen_loss=9.46] Training Epoch 443/501: 100%|██████████| 98/98 [00:02<00:00, 43.99it/s, disc_loss=0.0239, gen_loss=5.01] Training Epoch 444/501: 100%|██████████| 98/98 [00:02<00:00, 43.92it/s, disc_loss=0.0404, gen_loss=5.6] Training Epoch 445/501: 100%|██████████| 98/98 [00:02<00:00, 46.93it/s, disc_loss=0.0281, gen_loss=5.8] Training Epoch 446/501: 100%|██████████| 98/98 [00:02<00:00, 45.85it/s, disc_loss=0.15, gen_loss=3.59] Training Epoch 447/501: 100%|██████████| 98/98 [00:02<00:00, 48.00it/s, disc_loss=0.0187, gen_loss=3.84] Training Epoch 448/501: 100%|██████████| 98/98 [00:01<00:00, 52.40it/s, disc_loss=0.412, gen_loss=12.5] Training Epoch 449/501: 100%|██████████| 98/98 [00:02<00:00, 48.84it/s, disc_loss=0.0171, gen_loss=5.24] Training Epoch 450/501: 100%|██████████| 98/98 [00:02<00:00, 42.24it/s, disc_loss=0.0773, gen_loss=5.11] Training Epoch 451/501: 100%|██████████| 98/98 [00:02<00:00, 46.15it/s, disc_loss=0.359, gen_loss=0.975]
Training Epoch 452/501: 100%|██████████| 98/98 [00:02<00:00, 48.40it/s, disc_loss=0.0509, gen_loss=4.85] Training Epoch 453/501: 100%|██████████| 98/98 [00:02<00:00, 44.91it/s, disc_loss=0.0624, gen_loss=5.03] Training Epoch 454/501: 100%|██████████| 98/98 [00:01<00:00, 49.46it/s, disc_loss=0.00307, gen_loss=5.75] Training Epoch 455/501: 100%|██████████| 98/98 [00:01<00:00, 49.44it/s, disc_loss=0.0331, gen_loss=5.9] Training Epoch 456/501: 100%|██████████| 98/98 [00:02<00:00, 44.48it/s, disc_loss=0.0757, gen_loss=3.85] Training Epoch 457/501: 100%|██████████| 98/98 [00:01<00:00, 50.39it/s, disc_loss=0.00791, gen_loss=9.51] Training Epoch 458/501: 100%|██████████| 98/98 [00:01<00:00, 50.41it/s, disc_loss=0.0779, gen_loss=5.15] Training Epoch 459/501: 100%|██████████| 98/98 [00:01<00:00, 50.15it/s, disc_loss=0.133, gen_loss=3.86] Training Epoch 460/501: 100%|██████████| 98/98 [00:02<00:00, 44.81it/s, disc_loss=0.0053, gen_loss=3.54] Training Epoch 461/501: 100%|██████████| 98/98 [00:01<00:00, 50.10it/s, disc_loss=0.0996, gen_loss=2.37] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 63.31888961791992, KID: 0.030933653935790062
Training Epoch 462/501: 100%|██████████| 98/98 [00:02<00:00, 43.22it/s, disc_loss=0.017, gen_loss=6.59] Training Epoch 463/501: 100%|██████████| 98/98 [00:02<00:00, 44.34it/s, disc_loss=0.0141, gen_loss=5.66] Training Epoch 464/501: 100%|██████████| 98/98 [00:02<00:00, 44.98it/s, disc_loss=0.0203, gen_loss=6.46] Training Epoch 465/501: 100%|██████████| 98/98 [00:02<00:00, 44.11it/s, disc_loss=0.0383, gen_loss=5.15] Training Epoch 466/501: 100%|██████████| 98/98 [00:02<00:00, 40.45it/s, disc_loss=0.0307, gen_loss=9.77] Training Epoch 467/501: 100%|██████████| 98/98 [00:02<00:00, 43.78it/s, disc_loss=0.0337, gen_loss=4.8] Training Epoch 468/501: 100%|██████████| 98/98 [00:02<00:00, 44.34it/s, disc_loss=0.0982, gen_loss=3.24] Training Epoch 469/501: 100%|██████████| 98/98 [00:02<00:00, 44.07it/s, disc_loss=2.43, gen_loss=4.33] Training Epoch 470/501: 100%|██████████| 98/98 [00:02<00:00, 43.60it/s, disc_loss=0.173, gen_loss=2.18] Training Epoch 471/501: 100%|██████████| 98/98 [00:02<00:00, 43.37it/s, disc_loss=0.305, gen_loss=2.28] Training Epoch 472/501: 100%|██████████| 98/98 [00:02<00:00, 43.72it/s, disc_loss=0.00962, gen_loss=6.27] Training Epoch 473/501: 100%|██████████| 98/98 [00:02<00:00, 45.34it/s, disc_loss=0.0681, gen_loss=8.63] Training Epoch 474/501: 100%|██████████| 98/98 [00:02<00:00, 40.12it/s, disc_loss=0.00541, gen_loss=6.56] Training Epoch 475/501: 100%|██████████| 98/98 [00:02<00:00, 43.54it/s, disc_loss=0.0184, gen_loss=11.2] Training Epoch 476/501: 100%|██████████| 98/98 [00:02<00:00, 44.06it/s, disc_loss=0.061, gen_loss=4.81] Training Epoch 477/501: 100%|██████████| 98/98 [00:02<00:00, 44.62it/s, disc_loss=0.145, gen_loss=1.5] Training Epoch 478/501: 100%|██████████| 98/98 [00:02<00:00, 43.94it/s, disc_loss=0.106, gen_loss=4.01] Training Epoch 479/501: 100%|██████████| 98/98 [00:02<00:00, 44.32it/s, disc_loss=0.0646, gen_loss=5.55] Training Epoch 480/501: 100%|██████████| 98/98 [00:02<00:00, 46.06it/s, disc_loss=0.0318, gen_loss=4.51] Training Epoch 481/501: 100%|██████████| 98/98 [00:02<00:00, 45.20it/s, disc_loss=0.055, gen_loss=6.37] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 66.50039672851562, KID: 0.03389844670891762
Training Epoch 482/501: 100%|██████████| 98/98 [00:02<00:00, 47.02it/s, disc_loss=0.113, gen_loss=8.07] Training Epoch 483/501: 100%|██████████| 98/98 [00:02<00:00, 40.32it/s, disc_loss=0.0235, gen_loss=5.7] Training Epoch 484/501: 100%|██████████| 98/98 [00:02<00:00, 45.79it/s, disc_loss=0.0346, gen_loss=6.94] Training Epoch 485/501: 100%|██████████| 98/98 [00:02<00:00, 44.93it/s, disc_loss=0.12, gen_loss=6.03] Training Epoch 486/501: 100%|██████████| 98/98 [00:01<00:00, 50.57it/s, disc_loss=0.0373, gen_loss=5.07] Training Epoch 487/501: 100%|██████████| 98/98 [00:02<00:00, 46.42it/s, disc_loss=0.0198, gen_loss=6.51] Training Epoch 488/501: 100%|██████████| 98/98 [00:02<00:00, 44.54it/s, disc_loss=0.0556, gen_loss=7.94] Training Epoch 489/501: 100%|██████████| 98/98 [00:02<00:00, 44.79it/s, disc_loss=0.0317, gen_loss=5.14] Training Epoch 490/501: 100%|██████████| 98/98 [00:02<00:00, 44.71it/s, disc_loss=0.041, gen_loss=6.22] Training Epoch 491/501: 100%|██████████| 98/98 [00:02<00:00, 43.21it/s, disc_loss=0.0668, gen_loss=5.69] Training Epoch 492/501: 100%|██████████| 98/98 [00:02<00:00, 43.49it/s, disc_loss=0.418, gen_loss=0.33] Training Epoch 493/501: 100%|██████████| 98/98 [00:02<00:00, 44.45it/s, disc_loss=0.0272, gen_loss=5.25] Training Epoch 494/501: 100%|██████████| 98/98 [00:02<00:00, 45.20it/s, disc_loss=0.084, gen_loss=8.87] Training Epoch 495/501: 100%|██████████| 98/98 [00:02<00:00, 44.03it/s, disc_loss=0.0785, gen_loss=3.65] Training Epoch 496/501: 100%|██████████| 98/98 [00:01<00:00, 53.15it/s, disc_loss=0.0409, gen_loss=4.59] Training Epoch 497/501: 100%|██████████| 98/98 [00:02<00:00, 44.45it/s, disc_loss=0.572, gen_loss=6.1] Training Epoch 498/501: 100%|██████████| 98/98 [00:02<00:00, 48.93it/s, disc_loss=0.0345, gen_loss=4.82] Training Epoch 499/501: 100%|██████████| 98/98 [00:01<00:00, 51.49it/s, disc_loss=0.162, gen_loss=9.29] Training Epoch 500/501: 100%|██████████| 98/98 [00:02<00:00, 42.65it/s, disc_loss=1.77, gen_loss=7.42] Training Epoch 501/501: 100%|██████████| 98/98 [00:02<00:00, 43.74it/s, disc_loss=0.0546, gen_loss=5.99] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 75.39200592041016, KID: 0.03671291843056679
<Figure size 1200x600 with 0 Axes>
gen_loaded = torch.load(f"/workspace/cgan-500e-gen_1706658766.pt")
disc_loaded = torch.load(f"/workspace/cgan-500e-disc_1706658766.pt")
cgan = cGAN(gen_loaded,disc_loaded,train_loader)
Observations:¶
- The generated images exhibit decent poor quality and there are many aspects that could be improved. There are quite some artifacts present in the image most likely due to a overlap issue.
- Regardless of labels, model seems to output very similiar looking images which may suggest mode collapse.
- Throughout the model's training the generator loss seems to increase alot while the dicriminator loss decreased slowly. This could suggest that the discriminator is getting too strong and resulting in the generator not learning anything meaningful.
- Both the FID and KID are quite unstable reaching a low point around 50. This suggests the model is not improving due to the discriminator overpowering.
🎯 Engineering our Model for Excellence¶
Regularisation¶
To prevent the discriminator from overfitting, we will apply some simple regularisation techniques such as one sided label smoothing and R1 regularisation as discussed in this article. Later on we can also experiement with using DiffAugment, Feature matching and Minibatch discrimination if needed.
Label Smoothing¶
In a standard GAN setup, the discriminator is trained to distinguish between real and fake samples. The labels for real samples are set to 1, and those for fake samples are set to 0. However, in practice, this can lead to sharp decision boundaries, making the training more instable.
Label smoothing addresses these challenges by introducing a small amount of noise to the target labels. Instead of using the binary labels 1 and 0, label smoothing replaces them with values slightly less than 1 for real samples and slightly more than 0 for fake samples. A hyperparameter called smoothing factor determines the degree of smoothing that occurs.
This technique can be easily implemented by including an additoning parameter in our TorchGAN loss function.
R1 Regularization¶
R1 Regularization is a technique introduced in this paper to combat overfitting of the discriminator. It introduces a weight penalty term to the loss function that is proportional to the sum of all weights in the model.
The complete objective function during training is a combination of the original loss function added to the R1 regularization term:
$ \text{Total Loss} = \text{Original Loss} + R1(W) $
During optimization, the model aims to minimize this total loss, striking a balance between fitting the training data and preventing overfitting by penalizing large weights.
There are no official implementations of R1 loss but I did find a good implementation which I will use here with some modifications.
Avoiding Artifacts¶
Including pairs of upsampling and downsampling could improve our model. As mentioned in this article, simply using upsampling could lead to artifacts being present i images due to a overlap issue. Although the article suggests using resizing with nearest neigbours to upsample, we found our that this approach lead to poor results. So we didn't use it.
class R1(nn.Module):
"""
Implementation of the R1 GAN regularization.
"""
def __init__(self, gamma):
"""
Constructor method
"""
# Call super constructor
super(R1, self).__init__()
self.gamma= gamma
def forward(self, prediction_real: torch.Tensor, real_sample: torch.Tensor) -> torch.Tensor:
"""
Forward pass to compute the regularization
:param prediction_real: (torch.Tensor) Prediction of the discriminator for a batch of real images
:param real_sample: (torch.Tensor) Batch of the corresponding real images
:return: (torch.Tensor) Loss value
"""
#real_sample.requires_grad_(True)
# Calc gradient
grad_real = torch.autograd.grad(outputs=prediction_real.sum(), inputs=real_sample, create_graph=True)[0]
# Calc regularization
regularization_loss: torch.Tensor = self.gamma \
* grad_real.pow(2).view(grad_real.shape[0], -1).sum(1).mean()
return regularization_loss
def smooth_labels(labels, factor=0.2):
return labels * (1 - factor) + 0.5 * factor
# Generator architecture
class ResizeGenerator(nn.Module):
def __init__(self, latent_dim, hidden_dim):
super(ResizeGenerator, self).__init__()
self.hidden_dim = hidden_dim
self.latent_dim = latent_dim
self.input_layers = nn.Sequential(
nn.Linear(latent_dim + NUM_CLASS, hidden_dim),
nn.LeakyReLU(0.1, inplace=True)
)
self.conv_layers = nn.Sequential(
nn.ConvTranspose2d(int(hidden_dim/4), 128, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(128),
nn.LeakyReLU(True),
nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(64),
nn.LeakyReLU(True),
nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(32),
nn.LeakyReLU(True),
nn.ConvTranspose2d(32, CHANNELS, kernel_size=4, stride=2, padding=1, bias=False),
#nn.Upsample(size=32),
nn.BatchNorm2d(CHANNELS),
nn.LeakyReLU(True),
nn.Tanh()
)
def forward(self, noise, classes):
inputs = torch.cat((classes, noise), 1)
outputs = self.input_layers(inputs)
reshape_shape = int(self.hidden_dim/4)
outputs = torch.reshape(outputs, (outputs.size()[0], reshape_shape, 2, 2))
return self.conv_layers(outputs)
class R1GAN(cGAN):
def __init__(self, generator, discriminator, train_loader):
super().__init__(generator, discriminator, train_loader)
self.r1_loss = R1(0.2)
def disc_step(self,img,label):
self.d_opt.zero_grad()
img = img.to(device)
label = label.to(device)
img.requires_grad = True
noise = torch.normal(0, 1, (img.size()[0], self.generator.latent_dim), device=device)
fake_imgs = self.generator(noise, label)
fake_pred = self.discriminator(fake_imgs, label)
real_pred = self.discriminator(img, label)
fake_label = smooth_labels(torch.zeros((img.size()[0], 1), device=device))
real_label = smooth_labels(torch.ones((img.size()[0], 1), device=device))
r1_loss = self.r1_loss(real_pred, img)
d_loss = (self.loss(fake_pred, fake_label) + self.loss(real_pred, real_label)) / 2
d_loss = d_loss + r1_loss
self.d_opt.zero_grad()
d_loss.backward()
self.d_opt.step()
return d_loss.cpu().item()
gen_resize = ResizeGenerator(128,1024).to(device)
disc_simple = SimpleDiscriminator().to(device)
r1gan = R1GAN(gen_resize,disc_simple,train_loader)
r1gan.fit(501,train_loader)
plot_losses(501,[(cgan,"Simple"),(r1gan,"R1 & Smoothing")])
r1gan.save("r1-500e-v2")
torch.cuda.empty_cache()
Training R1GAN for 501 Epochs
Training Epoch 1/501: 100%|██████████| 98/98 [00:02<00:00, 41.11it/s, disc_loss=0.502, gen_loss=1.11]
100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 118.98226928710938, KID: 0.10694579780101776
Training Epoch 2/501: 100%|██████████| 98/98 [00:02<00:00, 37.23it/s, disc_loss=0.49, gen_loss=1.45] Training Epoch 3/501: 100%|██████████| 98/98 [00:02<00:00, 36.24it/s, disc_loss=0.593, gen_loss=1.68] Training Epoch 4/501: 100%|██████████| 98/98 [00:02<00:00, 38.05it/s, disc_loss=0.558, gen_loss=1.83] Training Epoch 5/501: 100%|██████████| 98/98 [00:02<00:00, 35.03it/s, disc_loss=0.479, gen_loss=1.62] Training Epoch 6/501: 100%|██████████| 98/98 [00:02<00:00, 41.97it/s, disc_loss=0.607, gen_loss=1.53] Training Epoch 7/501: 100%|██████████| 98/98 [00:02<00:00, 35.48it/s, disc_loss=0.549, gen_loss=1.39] Training Epoch 8/501: 100%|██████████| 98/98 [00:02<00:00, 36.34it/s, disc_loss=0.515, gen_loss=1.21] Training Epoch 9/501: 100%|██████████| 98/98 [00:02<00:00, 38.64it/s, disc_loss=0.516, gen_loss=2.93] Training Epoch 10/501: 100%|██████████| 98/98 [00:02<00:00, 39.01it/s, disc_loss=0.471, gen_loss=1.34] Training Epoch 11/501: 100%|██████████| 98/98 [00:02<00:00, 33.43it/s, disc_loss=0.456, gen_loss=1.63] Training Epoch 12/501: 100%|██████████| 98/98 [00:02<00:00, 36.61it/s, disc_loss=0.396, gen_loss=2.23] Training Epoch 13/501: 100%|██████████| 98/98 [00:02<00:00, 39.97it/s, disc_loss=0.447, gen_loss=2.65] Training Epoch 14/501: 100%|██████████| 98/98 [00:02<00:00, 35.22it/s, disc_loss=0.427, gen_loss=1.83] Training Epoch 15/501: 100%|██████████| 98/98 [00:02<00:00, 36.12it/s, disc_loss=0.471, gen_loss=1.44] Training Epoch 16/501: 100%|██████████| 98/98 [00:02<00:00, 34.83it/s, disc_loss=0.527, gen_loss=1.59] Training Epoch 17/501: 100%|██████████| 98/98 [00:02<00:00, 40.79it/s, disc_loss=0.474, gen_loss=2.06] Training Epoch 18/501: 100%|██████████| 98/98 [00:02<00:00, 38.78it/s, disc_loss=0.456, gen_loss=1.67] Training Epoch 19/501: 100%|██████████| 98/98 [00:02<00:00, 35.18it/s, disc_loss=0.542, gen_loss=1.87] Training Epoch 20/501: 100%|██████████| 98/98 [00:02<00:00, 36.93it/s, disc_loss=0.402, gen_loss=2.22] Training Epoch 21/501: 100%|██████████| 98/98 [00:02<00:00, 37.67it/s, disc_loss=0.383, gen_loss=2.23] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 65.17433166503906, KID: 0.056415949016809464
Training Epoch 22/501: 100%|██████████| 98/98 [00:02<00:00, 39.90it/s, disc_loss=0.395, gen_loss=2.29] Training Epoch 23/501: 100%|██████████| 98/98 [00:02<00:00, 41.31it/s, disc_loss=0.372, gen_loss=2.33] Training Epoch 24/501: 100%|██████████| 98/98 [00:02<00:00, 35.01it/s, disc_loss=0.548, gen_loss=1.15] Training Epoch 25/501: 100%|██████████| 98/98 [00:02<00:00, 41.22it/s, disc_loss=0.408, gen_loss=1.74] Training Epoch 26/501: 100%|██████████| 98/98 [00:02<00:00, 34.11it/s, disc_loss=0.502, gen_loss=1.89] Training Epoch 27/501: 100%|██████████| 98/98 [00:02<00:00, 34.63it/s, disc_loss=0.392, gen_loss=2.41] Training Epoch 28/501: 100%|██████████| 98/98 [00:03<00:00, 31.97it/s, disc_loss=0.414, gen_loss=1.25] Training Epoch 29/501: 100%|██████████| 98/98 [00:02<00:00, 36.42it/s, disc_loss=0.401, gen_loss=2.3] Training Epoch 30/501: 100%|██████████| 98/98 [00:02<00:00, 36.30it/s, disc_loss=0.411, gen_loss=1.77] Training Epoch 31/501: 100%|██████████| 98/98 [00:02<00:00, 37.55it/s, disc_loss=0.412, gen_loss=3.01] Training Epoch 32/501: 100%|██████████| 98/98 [00:02<00:00, 36.86it/s, disc_loss=0.394, gen_loss=2.32] Training Epoch 33/501: 100%|██████████| 98/98 [00:02<00:00, 41.24it/s, disc_loss=0.388, gen_loss=2.71] Training Epoch 34/501: 100%|██████████| 98/98 [00:02<00:00, 42.53it/s, disc_loss=0.382, gen_loss=2.4] Training Epoch 35/501: 100%|██████████| 98/98 [00:02<00:00, 35.16it/s, disc_loss=0.386, gen_loss=1.7] Training Epoch 36/501: 100%|██████████| 98/98 [00:03<00:00, 31.65it/s, disc_loss=0.373, gen_loss=2.42] Training Epoch 37/501: 100%|██████████| 98/98 [00:02<00:00, 34.72it/s, disc_loss=0.405, gen_loss=2.92] Training Epoch 38/501: 100%|██████████| 98/98 [00:02<00:00, 36.67it/s, disc_loss=0.38, gen_loss=2.6] Training Epoch 39/501: 100%|██████████| 98/98 [00:02<00:00, 39.41it/s, disc_loss=0.43, gen_loss=1.53] Training Epoch 40/501: 100%|██████████| 98/98 [00:02<00:00, 38.20it/s, disc_loss=0.653, gen_loss=1.73] Training Epoch 41/501: 100%|██████████| 98/98 [00:02<00:00, 41.68it/s, disc_loss=0.423, gen_loss=2.2] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 69.68119812011719, KID: 0.0629037469625473
Training Epoch 42/501: 100%|██████████| 98/98 [00:02<00:00, 36.19it/s, disc_loss=0.385, gen_loss=1.87] Training Epoch 43/501: 100%|██████████| 98/98 [00:02<00:00, 36.77it/s, disc_loss=0.383, gen_loss=2.23] Training Epoch 44/501: 100%|██████████| 98/98 [00:02<00:00, 35.16it/s, disc_loss=0.375, gen_loss=1.91] Training Epoch 45/501: 100%|██████████| 98/98 [00:02<00:00, 34.67it/s, disc_loss=0.361, gen_loss=2.25] Training Epoch 46/501: 100%|██████████| 98/98 [00:02<00:00, 36.15it/s, disc_loss=0.442, gen_loss=1.89] Training Epoch 47/501: 100%|██████████| 98/98 [00:02<00:00, 36.66it/s, disc_loss=0.548, gen_loss=3.88] Training Epoch 48/501: 100%|██████████| 98/98 [00:02<00:00, 36.87it/s, disc_loss=0.367, gen_loss=1.94] Training Epoch 49/501: 100%|██████████| 98/98 [00:02<00:00, 39.40it/s, disc_loss=0.36, gen_loss=2.31] Training Epoch 50/501: 100%|██████████| 98/98 [00:02<00:00, 41.08it/s, disc_loss=0.404, gen_loss=1.76] Training Epoch 51/501: 100%|██████████| 98/98 [00:02<00:00, 42.11it/s, disc_loss=0.379, gen_loss=1.69]
Training Epoch 52/501: 100%|██████████| 98/98 [00:02<00:00, 32.78it/s, disc_loss=0.392, gen_loss=2.7] Training Epoch 53/501: 100%|██████████| 98/98 [00:02<00:00, 35.61it/s, disc_loss=0.405, gen_loss=1.48] Training Epoch 54/501: 100%|██████████| 98/98 [00:02<00:00, 37.89it/s, disc_loss=0.375, gen_loss=1.77] Training Epoch 55/501: 100%|██████████| 98/98 [00:02<00:00, 39.86it/s, disc_loss=0.405, gen_loss=1.56] Training Epoch 56/501: 100%|██████████| 98/98 [00:02<00:00, 35.51it/s, disc_loss=0.379, gen_loss=1.98] Training Epoch 57/501: 100%|██████████| 98/98 [00:02<00:00, 38.50it/s, disc_loss=0.39, gen_loss=1.64] Training Epoch 58/501: 100%|██████████| 98/98 [00:02<00:00, 37.22it/s, disc_loss=0.411, gen_loss=3.1] Training Epoch 59/501: 100%|██████████| 98/98 [00:02<00:00, 37.75it/s, disc_loss=0.368, gen_loss=1.73] Training Epoch 60/501: 100%|██████████| 98/98 [00:02<00:00, 33.13it/s, disc_loss=0.488, gen_loss=2.16] Training Epoch 61/501: 100%|██████████| 98/98 [00:02<00:00, 38.66it/s, disc_loss=0.404, gen_loss=2.08] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 64.255859375, KID: 0.0571897029876709
Training Epoch 62/501: 100%|██████████| 98/98 [00:02<00:00, 40.64it/s, disc_loss=0.423, gen_loss=2.73] Training Epoch 63/501: 100%|██████████| 98/98 [00:02<00:00, 36.17it/s, disc_loss=0.37, gen_loss=1.71] Training Epoch 64/501: 100%|██████████| 98/98 [00:02<00:00, 41.27it/s, disc_loss=0.386, gen_loss=2.31] Training Epoch 65/501: 100%|██████████| 98/98 [00:02<00:00, 38.02it/s, disc_loss=0.523, gen_loss=2.07] Training Epoch 66/501: 100%|██████████| 98/98 [00:02<00:00, 35.61it/s, disc_loss=0.365, gen_loss=2.38] Training Epoch 67/501: 100%|██████████| 98/98 [00:02<00:00, 34.91it/s, disc_loss=0.438, gen_loss=1.79] Training Epoch 68/501: 100%|██████████| 98/98 [00:02<00:00, 35.39it/s, disc_loss=0.441, gen_loss=3.13] Training Epoch 69/501: 100%|██████████| 98/98 [00:02<00:00, 37.21it/s, disc_loss=0.41, gen_loss=2.08] Training Epoch 70/501: 100%|██████████| 98/98 [00:02<00:00, 37.16it/s, disc_loss=0.37, gen_loss=2.93] Training Epoch 71/501: 100%|██████████| 98/98 [00:02<00:00, 35.23it/s, disc_loss=0.383, gen_loss=2.38] Training Epoch 72/501: 100%|██████████| 98/98 [00:02<00:00, 35.65it/s, disc_loss=0.398, gen_loss=1.88] Training Epoch 73/501: 100%|██████████| 98/98 [00:02<00:00, 38.60it/s, disc_loss=0.382, gen_loss=3.31] Training Epoch 74/501: 100%|██████████| 98/98 [00:02<00:00, 40.75it/s, disc_loss=0.392, gen_loss=2.74] Training Epoch 75/501: 100%|██████████| 98/98 [00:02<00:00, 35.57it/s, disc_loss=0.388, gen_loss=1.9] Training Epoch 76/501: 100%|██████████| 98/98 [00:02<00:00, 35.42it/s, disc_loss=0.38, gen_loss=2.05] Training Epoch 77/501: 100%|██████████| 98/98 [00:02<00:00, 37.78it/s, disc_loss=0.379, gen_loss=2.71] Training Epoch 78/501: 100%|██████████| 98/98 [00:02<00:00, 33.42it/s, disc_loss=0.367, gen_loss=2.38] Training Epoch 79/501: 100%|██████████| 98/98 [00:02<00:00, 35.24it/s, disc_loss=0.372, gen_loss=2.29] Training Epoch 80/501: 100%|██████████| 98/98 [00:02<00:00, 36.06it/s, disc_loss=0.403, gen_loss=2.36] Training Epoch 81/501: 100%|██████████| 98/98 [00:02<00:00, 35.65it/s, disc_loss=0.373, gen_loss=1.89] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 54.158607482910156, KID: 0.046451032161712646
Training Epoch 82/501: 100%|██████████| 98/98 [00:02<00:00, 37.88it/s, disc_loss=0.387, gen_loss=2.79] Training Epoch 83/501: 100%|██████████| 98/98 [00:02<00:00, 35.08it/s, disc_loss=0.443, gen_loss=2.61] Training Epoch 84/501: 100%|██████████| 98/98 [00:02<00:00, 35.66it/s, disc_loss=0.384, gen_loss=2.11] Training Epoch 85/501: 100%|██████████| 98/98 [00:02<00:00, 35.83it/s, disc_loss=0.38, gen_loss=2.26] Training Epoch 86/501: 100%|██████████| 98/98 [00:02<00:00, 35.44it/s, disc_loss=0.361, gen_loss=2.82] Training Epoch 87/501: 100%|██████████| 98/98 [00:02<00:00, 32.84it/s, disc_loss=0.375, gen_loss=2.06] Training Epoch 88/501: 100%|██████████| 98/98 [00:02<00:00, 35.26it/s, disc_loss=0.392, gen_loss=1.88] Training Epoch 89/501: 100%|██████████| 98/98 [00:02<00:00, 38.03it/s, disc_loss=0.385, gen_loss=2.55] Training Epoch 90/501: 100%|██████████| 98/98 [00:02<00:00, 37.20it/s, disc_loss=0.37, gen_loss=2.53] Training Epoch 91/501: 100%|██████████| 98/98 [00:02<00:00, 36.96it/s, disc_loss=0.496, gen_loss=4.42] Training Epoch 92/501: 100%|██████████| 98/98 [00:02<00:00, 39.82it/s, disc_loss=0.36, gen_loss=2] Training Epoch 93/501: 100%|██████████| 98/98 [00:02<00:00, 36.60it/s, disc_loss=0.377, gen_loss=2.68] Training Epoch 94/501: 100%|██████████| 98/98 [00:02<00:00, 37.31it/s, disc_loss=0.442, gen_loss=3.62] Training Epoch 95/501: 100%|██████████| 98/98 [00:02<00:00, 36.55it/s, disc_loss=0.373, gen_loss=3.17] Training Epoch 96/501: 100%|██████████| 98/98 [00:02<00:00, 33.30it/s, disc_loss=0.415, gen_loss=1.88] Training Epoch 97/501: 100%|██████████| 98/98 [00:02<00:00, 34.91it/s, disc_loss=0.353, gen_loss=2.25] Training Epoch 98/501: 100%|██████████| 98/98 [00:02<00:00, 37.36it/s, disc_loss=0.383, gen_loss=2.38] Training Epoch 99/501: 100%|██████████| 98/98 [00:02<00:00, 36.43it/s, disc_loss=0.386, gen_loss=2.99] Training Epoch 100/501: 100%|██████████| 98/98 [00:02<00:00, 36.51it/s, disc_loss=0.395, gen_loss=1.91] Training Epoch 101/501: 100%|██████████| 98/98 [00:02<00:00, 38.50it/s, disc_loss=0.371, gen_loss=2.49] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 50.88734436035156, KID: 0.0472293384373188
Training Epoch 102/501: 100%|██████████| 98/98 [00:03<00:00, 31.09it/s, disc_loss=0.438, gen_loss=1.96] Training Epoch 103/501: 100%|██████████| 98/98 [00:02<00:00, 35.56it/s, disc_loss=0.365, gen_loss=1.97] Training Epoch 104/501: 100%|██████████| 98/98 [00:02<00:00, 34.68it/s, disc_loss=0.359, gen_loss=1.95] Training Epoch 105/501: 100%|██████████| 98/98 [00:02<00:00, 35.28it/s, disc_loss=0.505, gen_loss=2.53] Training Epoch 106/501: 100%|██████████| 98/98 [00:02<00:00, 35.52it/s, disc_loss=0.454, gen_loss=3.59] Training Epoch 107/501: 100%|██████████| 98/98 [00:02<00:00, 37.72it/s, disc_loss=0.398, gen_loss=2.46] Training Epoch 108/501: 100%|██████████| 98/98 [00:02<00:00, 39.70it/s, disc_loss=0.364, gen_loss=2.01] Training Epoch 109/501: 100%|██████████| 98/98 [00:02<00:00, 37.95it/s, disc_loss=0.446, gen_loss=1.73] Training Epoch 110/501: 100%|██████████| 98/98 [00:02<00:00, 38.07it/s, disc_loss=0.377, gen_loss=2.12] Training Epoch 111/501: 100%|██████████| 98/98 [00:02<00:00, 34.44it/s, disc_loss=0.392, gen_loss=2.64] Training Epoch 112/501: 100%|██████████| 98/98 [00:02<00:00, 41.18it/s, disc_loss=0.428, gen_loss=1.49] Training Epoch 113/501: 100%|██████████| 98/98 [00:02<00:00, 35.39it/s, disc_loss=0.387, gen_loss=2.11] Training Epoch 114/501: 100%|██████████| 98/98 [00:02<00:00, 41.64it/s, disc_loss=0.406, gen_loss=2] Training Epoch 115/501: 100%|██████████| 98/98 [00:02<00:00, 37.39it/s, disc_loss=0.37, gen_loss=2.34] Training Epoch 116/501: 100%|██████████| 98/98 [00:02<00:00, 38.74it/s, disc_loss=0.387, gen_loss=2.97] Training Epoch 117/501: 100%|██████████| 98/98 [00:02<00:00, 36.34it/s, disc_loss=0.468, gen_loss=1.04] Training Epoch 118/501: 100%|██████████| 98/98 [00:02<00:00, 36.90it/s, disc_loss=0.54, gen_loss=4.28] Training Epoch 119/501: 100%|██████████| 98/98 [00:02<00:00, 37.30it/s, disc_loss=0.364, gen_loss=2.83] Training Epoch 120/501: 100%|██████████| 98/98 [00:02<00:00, 34.81it/s, disc_loss=0.384, gen_loss=2.07] Training Epoch 121/501: 100%|██████████| 98/98 [00:02<00:00, 36.44it/s, disc_loss=0.441, gen_loss=1.42] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 54.175872802734375, KID: 0.04654587805271149
Training Epoch 122/501: 100%|██████████| 98/98 [00:02<00:00, 40.35it/s, disc_loss=0.375, gen_loss=2.63] Training Epoch 123/501: 100%|██████████| 98/98 [00:02<00:00, 37.67it/s, disc_loss=0.359, gen_loss=2.02] Training Epoch 124/501: 100%|██████████| 98/98 [00:02<00:00, 36.26it/s, disc_loss=0.383, gen_loss=2.39] Training Epoch 125/501: 100%|██████████| 98/98 [00:02<00:00, 34.94it/s, disc_loss=0.374, gen_loss=1.83] Training Epoch 126/501: 100%|██████████| 98/98 [00:02<00:00, 40.09it/s, disc_loss=0.374, gen_loss=2.63] Training Epoch 127/501: 100%|██████████| 98/98 [00:02<00:00, 38.38it/s, disc_loss=0.371, gen_loss=2.35] Training Epoch 128/501: 100%|██████████| 98/98 [00:02<00:00, 36.50it/s, disc_loss=0.355, gen_loss=2.26] Training Epoch 129/501: 100%|██████████| 98/98 [00:02<00:00, 38.72it/s, disc_loss=0.429, gen_loss=3.27] Training Epoch 130/501: 100%|██████████| 98/98 [00:02<00:00, 35.61it/s, disc_loss=0.41, gen_loss=1.84] Training Epoch 131/501: 100%|██████████| 98/98 [00:02<00:00, 37.35it/s, disc_loss=0.398, gen_loss=2.98] Training Epoch 132/501: 100%|██████████| 98/98 [00:02<00:00, 35.71it/s, disc_loss=0.364, gen_loss=2.02] Training Epoch 133/501: 100%|██████████| 98/98 [00:02<00:00, 34.19it/s, disc_loss=0.452, gen_loss=3.53] Training Epoch 134/501: 100%|██████████| 98/98 [00:02<00:00, 33.83it/s, disc_loss=0.37, gen_loss=2.64] Training Epoch 135/501: 100%|██████████| 98/98 [00:02<00:00, 33.86it/s, disc_loss=0.411, gen_loss=1.69] Training Epoch 136/501: 100%|██████████| 98/98 [00:02<00:00, 34.63it/s, disc_loss=0.413, gen_loss=1.64] Training Epoch 137/501: 100%|██████████| 98/98 [00:02<00:00, 37.65it/s, disc_loss=0.376, gen_loss=2.27] Training Epoch 138/501: 100%|██████████| 98/98 [00:02<00:00, 36.03it/s, disc_loss=0.384, gen_loss=3.25] Training Epoch 139/501: 100%|██████████| 98/98 [00:03<00:00, 32.10it/s, disc_loss=0.374, gen_loss=2.27] Training Epoch 140/501: 100%|██████████| 98/98 [00:02<00:00, 39.03it/s, disc_loss=0.358, gen_loss=1.98] Training Epoch 141/501: 100%|██████████| 98/98 [00:02<00:00, 36.58it/s, disc_loss=0.404, gen_loss=2.99] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 54.470558166503906, KID: 0.04310765117406845
Training Epoch 142/501: 100%|██████████| 98/98 [00:02<00:00, 38.08it/s, disc_loss=0.409, gen_loss=2.85] Training Epoch 143/501: 100%|██████████| 98/98 [00:02<00:00, 35.54it/s, disc_loss=0.384, gen_loss=2.71] Training Epoch 144/501: 100%|██████████| 98/98 [00:02<00:00, 37.00it/s, disc_loss=0.401, gen_loss=1.72] Training Epoch 145/501: 100%|██████████| 98/98 [00:02<00:00, 36.94it/s, disc_loss=0.361, gen_loss=2.31] Training Epoch 146/501: 100%|██████████| 98/98 [00:02<00:00, 38.28it/s, disc_loss=0.38, gen_loss=2.9] Training Epoch 147/501: 100%|██████████| 98/98 [00:02<00:00, 36.77it/s, disc_loss=0.354, gen_loss=2.61] Training Epoch 148/501: 100%|██████████| 98/98 [00:02<00:00, 38.49it/s, disc_loss=0.374, gen_loss=2.61] Training Epoch 149/501: 100%|██████████| 98/98 [00:02<00:00, 36.08it/s, disc_loss=0.373, gen_loss=2.39] Training Epoch 150/501: 100%|██████████| 98/98 [00:02<00:00, 36.03it/s, disc_loss=0.426, gen_loss=3.12] Training Epoch 151/501: 100%|██████████| 98/98 [00:02<00:00, 40.12it/s, disc_loss=0.411, gen_loss=2.75]
Training Epoch 152/501: 100%|██████████| 98/98 [00:02<00:00, 35.52it/s, disc_loss=0.388, gen_loss=2.35] Training Epoch 153/501: 100%|██████████| 98/98 [00:02<00:00, 34.91it/s, disc_loss=0.367, gen_loss=2.44] Training Epoch 154/501: 100%|██████████| 98/98 [00:02<00:00, 32.68it/s, disc_loss=0.358, gen_loss=2.4] Training Epoch 155/501: 100%|██████████| 98/98 [00:02<00:00, 36.20it/s, disc_loss=0.403, gen_loss=2.54] Training Epoch 156/501: 100%|██████████| 98/98 [00:02<00:00, 35.06it/s, disc_loss=0.412, gen_loss=1.63] Training Epoch 157/501: 100%|██████████| 98/98 [00:02<00:00, 36.23it/s, disc_loss=0.453, gen_loss=1.38] Training Epoch 158/501: 100%|██████████| 98/98 [00:02<00:00, 38.55it/s, disc_loss=0.389, gen_loss=1.8] Training Epoch 159/501: 100%|██████████| 98/98 [00:02<00:00, 35.15it/s, disc_loss=0.467, gen_loss=3.11] Training Epoch 160/501: 100%|██████████| 98/98 [00:02<00:00, 34.70it/s, disc_loss=0.401, gen_loss=3.01] Training Epoch 161/501: 100%|██████████| 98/98 [00:02<00:00, 36.50it/s, disc_loss=0.39, gen_loss=2.24] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 55.74485778808594, KID: 0.04580831900238991
Training Epoch 162/501: 100%|██████████| 98/98 [00:02<00:00, 34.69it/s, disc_loss=0.38, gen_loss=2.5] Training Epoch 163/501: 100%|██████████| 98/98 [00:02<00:00, 32.73it/s, disc_loss=0.393, gen_loss=3.26] Training Epoch 164/501: 100%|██████████| 98/98 [00:02<00:00, 34.63it/s, disc_loss=0.438, gen_loss=2.52] Training Epoch 165/501: 100%|██████████| 98/98 [00:02<00:00, 40.58it/s, disc_loss=0.372, gen_loss=2.57] Training Epoch 166/501: 100%|██████████| 98/98 [00:02<00:00, 37.97it/s, disc_loss=0.37, gen_loss=2.33] Training Epoch 167/501: 100%|██████████| 98/98 [00:02<00:00, 34.90it/s, disc_loss=0.472, gen_loss=1.45] Training Epoch 168/501: 100%|██████████| 98/98 [00:02<00:00, 38.60it/s, disc_loss=0.475, gen_loss=1.12] Training Epoch 169/501: 100%|██████████| 98/98 [00:02<00:00, 35.21it/s, disc_loss=0.384, gen_loss=2.78] Training Epoch 170/501: 100%|██████████| 98/98 [00:02<00:00, 35.15it/s, disc_loss=0.367, gen_loss=2.21] Training Epoch 171/501: 100%|██████████| 98/98 [00:02<00:00, 34.00it/s, disc_loss=0.371, gen_loss=2.46] Training Epoch 172/501: 100%|██████████| 98/98 [00:02<00:00, 34.05it/s, disc_loss=0.367, gen_loss=1.83] Training Epoch 173/501: 100%|██████████| 98/98 [00:02<00:00, 36.22it/s, disc_loss=0.375, gen_loss=2.24] Training Epoch 174/501: 100%|██████████| 98/98 [00:02<00:00, 37.80it/s, disc_loss=0.387, gen_loss=1.91] Training Epoch 175/501: 100%|██████████| 98/98 [00:02<00:00, 35.56it/s, disc_loss=0.433, gen_loss=1.5] Training Epoch 176/501: 100%|██████████| 98/98 [00:02<00:00, 34.97it/s, disc_loss=0.382, gen_loss=1.92] Training Epoch 177/501: 100%|██████████| 98/98 [00:02<00:00, 41.93it/s, disc_loss=0.427, gen_loss=1.35] Training Epoch 178/501: 100%|██████████| 98/98 [00:02<00:00, 35.60it/s, disc_loss=0.395, gen_loss=1.84] Training Epoch 179/501: 100%|██████████| 98/98 [00:02<00:00, 38.25it/s, disc_loss=0.418, gen_loss=2.96] Training Epoch 180/501: 100%|██████████| 98/98 [00:02<00:00, 36.11it/s, disc_loss=0.377, gen_loss=2.76] Training Epoch 181/501: 100%|██████████| 98/98 [00:02<00:00, 36.37it/s, disc_loss=0.36, gen_loss=2.25] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 61.677799224853516, KID: 0.05125243961811066
Training Epoch 182/501: 100%|██████████| 98/98 [00:02<00:00, 37.49it/s, disc_loss=0.401, gen_loss=2.66] Training Epoch 183/501: 100%|██████████| 98/98 [00:02<00:00, 37.72it/s, disc_loss=0.369, gen_loss=2.06] Training Epoch 184/501: 100%|██████████| 98/98 [00:02<00:00, 38.94it/s, disc_loss=0.382, gen_loss=2.41] Training Epoch 185/501: 100%|██████████| 98/98 [00:02<00:00, 38.63it/s, disc_loss=0.374, gen_loss=1.6] Training Epoch 186/501: 100%|██████████| 98/98 [00:02<00:00, 37.95it/s, disc_loss=0.457, gen_loss=2.22] Training Epoch 187/501: 100%|██████████| 98/98 [00:02<00:00, 36.01it/s, disc_loss=0.376, gen_loss=2.25] Training Epoch 188/501: 100%|██████████| 98/98 [00:02<00:00, 35.68it/s, disc_loss=0.401, gen_loss=2.19] Training Epoch 189/501: 100%|██████████| 98/98 [00:02<00:00, 32.85it/s, disc_loss=0.415, gen_loss=2.91] Training Epoch 190/501: 100%|██████████| 98/98 [00:02<00:00, 37.96it/s, disc_loss=0.368, gen_loss=2.32] Training Epoch 191/501: 100%|██████████| 98/98 [00:02<00:00, 36.01it/s, disc_loss=0.375, gen_loss=2.1] Training Epoch 192/501: 100%|██████████| 98/98 [00:02<00:00, 35.32it/s, disc_loss=0.414, gen_loss=1.87] Training Epoch 193/501: 100%|██████████| 98/98 [00:02<00:00, 37.94it/s, disc_loss=0.399, gen_loss=3.16] Training Epoch 194/501: 100%|██████████| 98/98 [00:02<00:00, 42.19it/s, disc_loss=0.377, gen_loss=1.55] Training Epoch 195/501: 100%|██████████| 98/98 [00:02<00:00, 37.47it/s, disc_loss=0.361, gen_loss=2.24] Training Epoch 196/501: 100%|██████████| 98/98 [00:02<00:00, 35.59it/s, disc_loss=0.419, gen_loss=2.87] Training Epoch 197/501: 100%|██████████| 98/98 [00:02<00:00, 35.88it/s, disc_loss=0.363, gen_loss=2.2] Training Epoch 198/501: 100%|██████████| 98/98 [00:02<00:00, 33.02it/s, disc_loss=0.383, gen_loss=2.12] Training Epoch 199/501: 100%|██████████| 98/98 [00:02<00:00, 38.83it/s, disc_loss=0.431, gen_loss=3.27] Training Epoch 200/501: 100%|██████████| 98/98 [00:02<00:00, 36.27it/s, disc_loss=0.362, gen_loss=2.38] Training Epoch 201/501: 100%|██████████| 98/98 [00:02<00:00, 42.95it/s, disc_loss=0.371, gen_loss=2.01] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 45.20331954956055, KID: 0.031952839344739914
Training Epoch 202/501: 100%|██████████| 98/98 [00:02<00:00, 34.62it/s, disc_loss=0.363, gen_loss=2.86] Training Epoch 203/501: 100%|██████████| 98/98 [00:02<00:00, 34.60it/s, disc_loss=0.37, gen_loss=1.86] Training Epoch 204/501: 100%|██████████| 98/98 [00:02<00:00, 34.40it/s, disc_loss=0.383, gen_loss=1.88] Training Epoch 205/501: 100%|██████████| 98/98 [00:02<00:00, 35.56it/s, disc_loss=0.361, gen_loss=2.68] Training Epoch 206/501: 100%|██████████| 98/98 [00:02<00:00, 35.27it/s, disc_loss=0.395, gen_loss=2.16] Training Epoch 207/501: 100%|██████████| 98/98 [00:02<00:00, 36.74it/s, disc_loss=0.394, gen_loss=1.58] Training Epoch 208/501: 100%|██████████| 98/98 [00:02<00:00, 35.07it/s, disc_loss=0.361, gen_loss=1.8] Training Epoch 209/501: 100%|██████████| 98/98 [00:02<00:00, 36.89it/s, disc_loss=0.37, gen_loss=2.52] Training Epoch 210/501: 100%|██████████| 98/98 [00:02<00:00, 35.76it/s, disc_loss=0.394, gen_loss=2.68] Training Epoch 211/501: 100%|██████████| 98/98 [00:02<00:00, 36.71it/s, disc_loss=0.368, gen_loss=1.79] Training Epoch 212/501: 100%|██████████| 98/98 [00:02<00:00, 35.50it/s, disc_loss=0.37, gen_loss=1.66] Training Epoch 213/501: 100%|██████████| 98/98 [00:02<00:00, 32.75it/s, disc_loss=0.381, gen_loss=1.95] Training Epoch 214/501: 100%|██████████| 98/98 [00:02<00:00, 37.33it/s, disc_loss=0.401, gen_loss=2.04] Training Epoch 215/501: 100%|██████████| 98/98 [00:02<00:00, 35.23it/s, disc_loss=0.413, gen_loss=3.36] Training Epoch 216/501: 100%|██████████| 98/98 [00:02<00:00, 35.17it/s, disc_loss=0.389, gen_loss=3.27] Training Epoch 217/501: 100%|██████████| 98/98 [00:02<00:00, 35.69it/s, disc_loss=0.386, gen_loss=1.33] Training Epoch 218/501: 100%|██████████| 98/98 [00:02<00:00, 35.68it/s, disc_loss=0.381, gen_loss=1.67] Training Epoch 219/501: 100%|██████████| 98/98 [00:02<00:00, 36.56it/s, disc_loss=0.365, gen_loss=3.02] Training Epoch 220/501: 100%|██████████| 98/98 [00:02<00:00, 39.60it/s, disc_loss=0.359, gen_loss=2.34] Training Epoch 221/501: 100%|██████████| 98/98 [00:02<00:00, 41.20it/s, disc_loss=0.395, gen_loss=3.05] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 48.94200134277344, KID: 0.038055263459682465
Training Epoch 222/501: 100%|██████████| 98/98 [00:03<00:00, 32.12it/s, disc_loss=0.361, gen_loss=2.62] Training Epoch 223/501: 100%|██████████| 98/98 [00:02<00:00, 39.79it/s, disc_loss=0.439, gen_loss=2.56] Training Epoch 224/501: 100%|██████████| 98/98 [00:02<00:00, 35.94it/s, disc_loss=0.365, gen_loss=2.19] Training Epoch 225/501: 100%|██████████| 98/98 [00:02<00:00, 35.14it/s, disc_loss=0.36, gen_loss=2.24] Training Epoch 226/501: 100%|██████████| 98/98 [00:02<00:00, 35.71it/s, disc_loss=0.367, gen_loss=2.13] Training Epoch 227/501: 100%|██████████| 98/98 [00:02<00:00, 35.87it/s, disc_loss=0.469, gen_loss=1.9] Training Epoch 228/501: 100%|██████████| 98/98 [00:02<00:00, 35.49it/s, disc_loss=0.417, gen_loss=3.53] Training Epoch 229/501: 100%|██████████| 98/98 [00:02<00:00, 36.13it/s, disc_loss=0.363, gen_loss=2.28] Training Epoch 230/501: 100%|██████████| 98/98 [00:02<00:00, 36.95it/s, disc_loss=0.386, gen_loss=2.19] Training Epoch 231/501: 100%|██████████| 98/98 [00:03<00:00, 32.63it/s, disc_loss=0.371, gen_loss=2.07] Training Epoch 232/501: 100%|██████████| 98/98 [00:02<00:00, 35.68it/s, disc_loss=0.384, gen_loss=2.26] Training Epoch 233/501: 100%|██████████| 98/98 [00:02<00:00, 41.02it/s, disc_loss=0.368, gen_loss=1.81] Training Epoch 234/501: 100%|██████████| 98/98 [00:02<00:00, 40.60it/s, disc_loss=0.36, gen_loss=2.24] Training Epoch 235/501: 100%|██████████| 98/98 [00:02<00:00, 44.94it/s, disc_loss=0.37, gen_loss=1.95] Training Epoch 236/501: 100%|██████████| 98/98 [00:02<00:00, 36.94it/s, disc_loss=0.367, gen_loss=2.17] Training Epoch 237/501: 100%|██████████| 98/98 [00:02<00:00, 34.50it/s, disc_loss=0.37, gen_loss=2.77] Training Epoch 238/501: 100%|██████████| 98/98 [00:02<00:00, 38.53it/s, disc_loss=0.38, gen_loss=1.7] Training Epoch 239/501: 100%|██████████| 98/98 [00:02<00:00, 36.51it/s, disc_loss=0.37, gen_loss=2.37] Training Epoch 240/501: 100%|██████████| 98/98 [00:02<00:00, 32.90it/s, disc_loss=0.384, gen_loss=2.99] Training Epoch 241/501: 100%|██████████| 98/98 [00:02<00:00, 36.98it/s, disc_loss=0.55, gen_loss=1.09] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 48.962013244628906, KID: 0.037415146827697754
Training Epoch 242/501: 100%|██████████| 98/98 [00:02<00:00, 40.44it/s, disc_loss=0.386, gen_loss=2.52] Training Epoch 243/501: 100%|██████████| 98/98 [00:02<00:00, 37.03it/s, disc_loss=0.409, gen_loss=3.15] Training Epoch 244/501: 100%|██████████| 98/98 [00:02<00:00, 34.15it/s, disc_loss=0.359, gen_loss=2.5] Training Epoch 245/501: 100%|██████████| 98/98 [00:02<00:00, 34.29it/s, disc_loss=0.373, gen_loss=2.65] Training Epoch 246/501: 100%|██████████| 98/98 [00:02<00:00, 34.68it/s, disc_loss=0.352, gen_loss=2.9] Training Epoch 247/501: 100%|██████████| 98/98 [00:02<00:00, 33.81it/s, disc_loss=0.357, gen_loss=1.96] Training Epoch 248/501: 100%|██████████| 98/98 [00:02<00:00, 34.62it/s, disc_loss=0.357, gen_loss=1.94] Training Epoch 249/501: 100%|██████████| 98/98 [00:03<00:00, 32.07it/s, disc_loss=0.369, gen_loss=1.56] Training Epoch 250/501: 100%|██████████| 98/98 [00:02<00:00, 36.29it/s, disc_loss=0.39, gen_loss=1.97] Training Epoch 251/501: 100%|██████████| 98/98 [00:02<00:00, 34.50it/s, disc_loss=0.367, gen_loss=1.93]
Training Epoch 252/501: 100%|██████████| 98/98 [00:02<00:00, 35.66it/s, disc_loss=0.374, gen_loss=2.16] Training Epoch 253/501: 100%|██████████| 98/98 [00:02<00:00, 34.33it/s, disc_loss=0.391, gen_loss=2.72] Training Epoch 254/501: 100%|██████████| 98/98 [00:02<00:00, 34.17it/s, disc_loss=0.363, gen_loss=2.58] Training Epoch 255/501: 100%|██████████| 98/98 [00:02<00:00, 33.80it/s, disc_loss=0.358, gen_loss=2.36] Training Epoch 256/501: 100%|██████████| 98/98 [00:02<00:00, 43.21it/s, disc_loss=0.356, gen_loss=2.57] Training Epoch 257/501: 100%|██████████| 98/98 [00:02<00:00, 37.34it/s, disc_loss=0.372, gen_loss=1.96] Training Epoch 258/501: 100%|██████████| 98/98 [00:02<00:00, 36.40it/s, disc_loss=0.38, gen_loss=2.29] Training Epoch 259/501: 100%|██████████| 98/98 [00:02<00:00, 37.61it/s, disc_loss=0.361, gen_loss=2.67] Training Epoch 260/501: 100%|██████████| 98/98 [00:02<00:00, 37.61it/s, disc_loss=0.374, gen_loss=1.99] Training Epoch 261/501: 100%|██████████| 98/98 [00:02<00:00, 37.44it/s, disc_loss=0.371, gen_loss=2.81] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 48.291893005371094, KID: 0.0362553633749485
Training Epoch 262/501: 100%|██████████| 98/98 [00:02<00:00, 34.58it/s, disc_loss=0.359, gen_loss=2.45] Training Epoch 263/501: 100%|██████████| 98/98 [00:02<00:00, 34.17it/s, disc_loss=0.377, gen_loss=2.25] Training Epoch 264/501: 100%|██████████| 98/98 [00:03<00:00, 32.21it/s, disc_loss=0.383, gen_loss=1.79] Training Epoch 265/501: 100%|██████████| 98/98 [00:02<00:00, 42.72it/s, disc_loss=0.39, gen_loss=2.12] Training Epoch 266/501: 100%|██████████| 98/98 [00:02<00:00, 45.71it/s, disc_loss=0.382, gen_loss=2.26] Training Epoch 267/501: 100%|██████████| 98/98 [00:02<00:00, 37.11it/s, disc_loss=0.361, gen_loss=2.11] Training Epoch 268/501: 100%|██████████| 98/98 [00:02<00:00, 36.47it/s, disc_loss=0.39, gen_loss=2.47] Training Epoch 269/501: 100%|██████████| 98/98 [00:02<00:00, 36.23it/s, disc_loss=0.382, gen_loss=2.18] Training Epoch 270/501: 100%|██████████| 98/98 [00:02<00:00, 36.97it/s, disc_loss=0.361, gen_loss=2.48] Training Epoch 271/501: 100%|██████████| 98/98 [00:02<00:00, 36.78it/s, disc_loss=0.385, gen_loss=2.81] Training Epoch 272/501: 100%|██████████| 98/98 [00:02<00:00, 35.59it/s, disc_loss=0.396, gen_loss=1.86] Training Epoch 273/501: 100%|██████████| 98/98 [00:02<00:00, 35.17it/s, disc_loss=0.486, gen_loss=1.52] Training Epoch 274/501: 100%|██████████| 98/98 [00:02<00:00, 35.28it/s, disc_loss=0.456, gen_loss=2.25] Training Epoch 275/501: 100%|██████████| 98/98 [00:02<00:00, 35.54it/s, disc_loss=0.36, gen_loss=2.09] Training Epoch 276/501: 100%|██████████| 98/98 [00:02<00:00, 34.67it/s, disc_loss=0.38, gen_loss=2.66] Training Epoch 277/501: 100%|██████████| 98/98 [00:02<00:00, 34.70it/s, disc_loss=0.375, gen_loss=1.04] Training Epoch 278/501: 100%|██████████| 98/98 [00:02<00:00, 35.20it/s, disc_loss=0.368, gen_loss=2.29] Training Epoch 279/501: 100%|██████████| 98/98 [00:02<00:00, 35.37it/s, disc_loss=0.36, gen_loss=2.44] Training Epoch 280/501: 100%|██████████| 98/98 [00:02<00:00, 35.30it/s, disc_loss=0.361, gen_loss=2.12] Training Epoch 281/501: 100%|██████████| 98/98 [00:02<00:00, 32.90it/s, disc_loss=0.375, gen_loss=2.01] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 50.821144104003906, KID: 0.036678437143564224
Training Epoch 282/501: 100%|██████████| 98/98 [00:02<00:00, 40.94it/s, disc_loss=0.37, gen_loss=2.42] Training Epoch 283/501: 100%|██████████| 98/98 [00:02<00:00, 35.08it/s, disc_loss=0.358, gen_loss=2.77] Training Epoch 284/501: 100%|██████████| 98/98 [00:02<00:00, 35.45it/s, disc_loss=0.367, gen_loss=1.68] Training Epoch 285/501: 100%|██████████| 98/98 [00:02<00:00, 35.39it/s, disc_loss=0.363, gen_loss=2.38] Training Epoch 286/501: 100%|██████████| 98/98 [00:02<00:00, 35.03it/s, disc_loss=0.392, gen_loss=2.72] Training Epoch 287/501: 100%|██████████| 98/98 [00:02<00:00, 35.63it/s, disc_loss=0.37, gen_loss=2.4] Training Epoch 288/501: 100%|██████████| 98/98 [00:02<00:00, 34.97it/s, disc_loss=0.373, gen_loss=2.22] Training Epoch 289/501: 100%|██████████| 98/98 [00:02<00:00, 39.66it/s, disc_loss=0.4, gen_loss=1.47] Training Epoch 290/501: 100%|██████████| 98/98 [00:02<00:00, 34.12it/s, disc_loss=0.359, gen_loss=2.11] Training Epoch 291/501: 100%|██████████| 98/98 [00:02<00:00, 36.09it/s, disc_loss=0.371, gen_loss=1.77] Training Epoch 292/501: 100%|██████████| 98/98 [00:02<00:00, 38.81it/s, disc_loss=0.366, gen_loss=2.54] Training Epoch 293/501: 100%|██████████| 98/98 [00:02<00:00, 36.14it/s, disc_loss=0.372, gen_loss=2.19] Training Epoch 294/501: 100%|██████████| 98/98 [00:02<00:00, 34.53it/s, disc_loss=0.371, gen_loss=2.03] Training Epoch 295/501: 100%|██████████| 98/98 [00:02<00:00, 36.47it/s, disc_loss=0.382, gen_loss=2.67] Training Epoch 296/501: 100%|██████████| 98/98 [00:02<00:00, 37.97it/s, disc_loss=0.391, gen_loss=1.72] Training Epoch 297/501: 100%|██████████| 98/98 [00:02<00:00, 36.48it/s, disc_loss=0.375, gen_loss=2.71] Training Epoch 298/501: 100%|██████████| 98/98 [00:02<00:00, 40.02it/s, disc_loss=0.368, gen_loss=2.63] Training Epoch 299/501: 100%|██████████| 98/98 [00:02<00:00, 39.72it/s, disc_loss=0.54, gen_loss=4.47] Training Epoch 300/501: 100%|██████████| 98/98 [00:02<00:00, 34.11it/s, disc_loss=0.39, gen_loss=1.69] Training Epoch 301/501: 100%|██████████| 98/98 [00:02<00:00, 37.53it/s, disc_loss=0.372, gen_loss=2.06] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 51.592079162597656, KID: 0.039605822414159775
Training Epoch 302/501: 100%|██████████| 98/98 [00:02<00:00, 39.98it/s, disc_loss=0.403, gen_loss=2.79] Training Epoch 303/501: 100%|██████████| 98/98 [00:02<00:00, 36.83it/s, disc_loss=0.418, gen_loss=3.02] Training Epoch 304/501: 100%|██████████| 98/98 [00:02<00:00, 36.14it/s, disc_loss=0.373, gen_loss=2.33] Training Epoch 305/501: 100%|██████████| 98/98 [00:02<00:00, 36.68it/s, disc_loss=0.392, gen_loss=2.39] Training Epoch 306/501: 100%|██████████| 98/98 [00:02<00:00, 33.76it/s, disc_loss=0.408, gen_loss=1.48] Training Epoch 307/501: 100%|██████████| 98/98 [00:02<00:00, 35.63it/s, disc_loss=0.419, gen_loss=1.3] Training Epoch 308/501: 100%|██████████| 98/98 [00:02<00:00, 35.87it/s, disc_loss=0.381, gen_loss=3.26] Training Epoch 309/501: 100%|██████████| 98/98 [00:02<00:00, 42.96it/s, disc_loss=0.376, gen_loss=2.39] Training Epoch 310/501: 100%|██████████| 98/98 [00:02<00:00, 36.14it/s, disc_loss=0.371, gen_loss=2.48] Training Epoch 311/501: 100%|██████████| 98/98 [00:02<00:00, 43.36it/s, disc_loss=0.401, gen_loss=3.33] Training Epoch 312/501: 100%|██████████| 98/98 [00:02<00:00, 38.01it/s, disc_loss=0.353, gen_loss=2.69] Training Epoch 313/501: 100%|██████████| 98/98 [00:02<00:00, 41.18it/s, disc_loss=0.367, gen_loss=1.64] Training Epoch 314/501: 100%|██████████| 98/98 [00:02<00:00, 37.15it/s, disc_loss=0.376, gen_loss=2.47] Training Epoch 315/501: 100%|██████████| 98/98 [00:02<00:00, 36.94it/s, disc_loss=0.396, gen_loss=1.66] Training Epoch 316/501: 100%|██████████| 98/98 [00:02<00:00, 32.95it/s, disc_loss=0.361, gen_loss=2.38] Training Epoch 317/501: 100%|██████████| 98/98 [00:02<00:00, 35.62it/s, disc_loss=0.419, gen_loss=3.47] Training Epoch 318/501: 100%|██████████| 98/98 [00:02<00:00, 36.17it/s, disc_loss=0.375, gen_loss=2.31] Training Epoch 319/501: 100%|██████████| 98/98 [00:02<00:00, 38.27it/s, disc_loss=0.361, gen_loss=2.17] Training Epoch 320/501: 100%|██████████| 98/98 [00:02<00:00, 40.04it/s, disc_loss=0.396, gen_loss=1.49] Training Epoch 321/501: 100%|██████████| 98/98 [00:02<00:00, 35.29it/s, disc_loss=0.375, gen_loss=2.49] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 48.89706802368164, KID: 0.03594847023487091
Training Epoch 322/501: 100%|██████████| 98/98 [00:02<00:00, 34.35it/s, disc_loss=0.359, gen_loss=2.89] Training Epoch 323/501: 100%|██████████| 98/98 [00:02<00:00, 36.15it/s, disc_loss=0.378, gen_loss=2.42] Training Epoch 324/501: 100%|██████████| 98/98 [00:02<00:00, 35.96it/s, disc_loss=0.395, gen_loss=3.05] Training Epoch 325/501: 100%|██████████| 98/98 [00:03<00:00, 32.57it/s, disc_loss=0.367, gen_loss=2.77] Training Epoch 326/501: 100%|██████████| 98/98 [00:02<00:00, 35.83it/s, disc_loss=0.367, gen_loss=1.8] Training Epoch 327/501: 100%|██████████| 98/98 [00:02<00:00, 34.72it/s, disc_loss=0.355, gen_loss=2.55] Training Epoch 328/501: 100%|██████████| 98/98 [00:02<00:00, 34.33it/s, disc_loss=0.357, gen_loss=2.69] Training Epoch 329/501: 100%|██████████| 98/98 [00:02<00:00, 33.71it/s, disc_loss=0.353, gen_loss=2.08] Training Epoch 330/501: 100%|██████████| 98/98 [00:02<00:00, 35.22it/s, disc_loss=0.442, gen_loss=3.51] Training Epoch 331/501: 100%|██████████| 98/98 [00:02<00:00, 35.18it/s, disc_loss=0.39, gen_loss=1.98] Training Epoch 332/501: 100%|██████████| 98/98 [00:02<00:00, 35.35it/s, disc_loss=0.368, gen_loss=2.17] Training Epoch 333/501: 100%|██████████| 98/98 [00:02<00:00, 39.95it/s, disc_loss=0.369, gen_loss=2.8] Training Epoch 334/501: 100%|██████████| 98/98 [00:02<00:00, 34.95it/s, disc_loss=0.365, gen_loss=3.15] Training Epoch 335/501: 100%|██████████| 98/98 [00:02<00:00, 38.97it/s, disc_loss=0.361, gen_loss=2.67] Training Epoch 336/501: 100%|██████████| 98/98 [00:02<00:00, 36.00it/s, disc_loss=0.487, gen_loss=2.17] Training Epoch 337/501: 100%|██████████| 98/98 [00:02<00:00, 40.49it/s, disc_loss=0.366, gen_loss=2.49] Training Epoch 338/501: 100%|██████████| 98/98 [00:02<00:00, 36.62it/s, disc_loss=0.378, gen_loss=3.04] Training Epoch 339/501: 100%|██████████| 98/98 [00:02<00:00, 36.17it/s, disc_loss=0.357, gen_loss=2.03] Training Epoch 340/501: 100%|██████████| 98/98 [00:02<00:00, 41.40it/s, disc_loss=0.378, gen_loss=1.67] Training Epoch 341/501: 100%|██████████| 98/98 [00:02<00:00, 36.47it/s, disc_loss=0.371, gen_loss=2.2] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 49.727996826171875, KID: 0.03628916293382645
Training Epoch 342/501: 100%|██████████| 98/98 [00:02<00:00, 35.90it/s, disc_loss=0.376, gen_loss=1.5] Training Epoch 343/501: 100%|██████████| 98/98 [00:02<00:00, 33.87it/s, disc_loss=0.401, gen_loss=1.59] Training Epoch 344/501: 100%|██████████| 98/98 [00:02<00:00, 36.60it/s, disc_loss=0.363, gen_loss=2.63] Training Epoch 345/501: 100%|██████████| 98/98 [00:02<00:00, 36.77it/s, disc_loss=0.355, gen_loss=2.07] Training Epoch 346/501: 100%|██████████| 98/98 [00:02<00:00, 37.84it/s, disc_loss=0.36, gen_loss=2.57] Training Epoch 347/501: 100%|██████████| 98/98 [00:02<00:00, 38.65it/s, disc_loss=0.37, gen_loss=2.17] Training Epoch 348/501: 100%|██████████| 98/98 [00:02<00:00, 39.06it/s, disc_loss=0.391, gen_loss=2.57] Training Epoch 349/501: 100%|██████████| 98/98 [00:02<00:00, 37.73it/s, disc_loss=0.364, gen_loss=2.26] Training Epoch 350/501: 100%|██████████| 98/98 [00:02<00:00, 41.36it/s, disc_loss=0.371, gen_loss=2.09] Training Epoch 351/501: 100%|██████████| 98/98 [00:02<00:00, 33.14it/s, disc_loss=0.387, gen_loss=1.56]
Training Epoch 352/501: 100%|██████████| 98/98 [00:02<00:00, 35.48it/s, disc_loss=0.357, gen_loss=2.15] Training Epoch 353/501: 100%|██████████| 98/98 [00:02<00:00, 35.82it/s, disc_loss=0.367, gen_loss=2.26] Training Epoch 354/501: 100%|██████████| 98/98 [00:02<00:00, 37.93it/s, disc_loss=0.392, gen_loss=1.5] Training Epoch 355/501: 100%|██████████| 98/98 [00:02<00:00, 37.85it/s, disc_loss=0.356, gen_loss=1.97] Training Epoch 356/501: 100%|██████████| 98/98 [00:02<00:00, 34.55it/s, disc_loss=0.375, gen_loss=1.59] Training Epoch 357/501: 100%|██████████| 98/98 [00:02<00:00, 32.71it/s, disc_loss=0.381, gen_loss=2.95] Training Epoch 358/501: 100%|██████████| 98/98 [00:02<00:00, 35.99it/s, disc_loss=0.365, gen_loss=1.97] Training Epoch 359/501: 100%|██████████| 98/98 [00:02<00:00, 35.33it/s, disc_loss=0.366, gen_loss=2.1] Training Epoch 360/501: 100%|██████████| 98/98 [00:02<00:00, 37.66it/s, disc_loss=0.361, gen_loss=2.33] Training Epoch 361/501: 100%|██████████| 98/98 [00:02<00:00, 35.70it/s, disc_loss=0.36, gen_loss=2.6] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 52.250553131103516, KID: 0.034269001334905624
Training Epoch 362/501: 100%|██████████| 98/98 [00:02<00:00, 34.92it/s, disc_loss=0.364, gen_loss=2.05] Training Epoch 363/501: 100%|██████████| 98/98 [00:02<00:00, 34.88it/s, disc_loss=0.364, gen_loss=1.93] Training Epoch 364/501: 100%|██████████| 98/98 [00:02<00:00, 35.75it/s, disc_loss=0.363, gen_loss=2.89] Training Epoch 365/501: 100%|██████████| 98/98 [00:02<00:00, 33.79it/s, disc_loss=0.359, gen_loss=2.58] Training Epoch 366/501: 100%|██████████| 98/98 [00:03<00:00, 31.66it/s, disc_loss=0.367, gen_loss=2.59] Training Epoch 367/501: 100%|██████████| 98/98 [00:02<00:00, 34.30it/s, disc_loss=0.372, gen_loss=1.71] Training Epoch 368/501: 100%|██████████| 98/98 [00:02<00:00, 35.68it/s, disc_loss=0.353, gen_loss=1.96] Training Epoch 369/501: 100%|██████████| 98/98 [00:02<00:00, 35.35it/s, disc_loss=0.37, gen_loss=2.56] Training Epoch 370/501: 100%|██████████| 98/98 [00:02<00:00, 35.10it/s, disc_loss=0.372, gen_loss=1.95] Training Epoch 371/501: 100%|██████████| 98/98 [00:02<00:00, 35.22it/s, disc_loss=0.366, gen_loss=1.97] Training Epoch 372/501: 100%|██████████| 98/98 [00:02<00:00, 35.67it/s, disc_loss=0.391, gen_loss=2.67] Training Epoch 373/501: 100%|██████████| 98/98 [00:02<00:00, 35.61it/s, disc_loss=0.359, gen_loss=2.28] Training Epoch 374/501: 100%|██████████| 98/98 [00:02<00:00, 34.97it/s, disc_loss=0.389, gen_loss=2.81] Training Epoch 375/501: 100%|██████████| 98/98 [00:02<00:00, 34.16it/s, disc_loss=0.379, gen_loss=2.82] Training Epoch 376/501: 100%|██████████| 98/98 [00:02<00:00, 36.34it/s, disc_loss=0.35, gen_loss=2.7] Training Epoch 377/501: 100%|██████████| 98/98 [00:02<00:00, 37.17it/s, disc_loss=0.377, gen_loss=3.03] Training Epoch 378/501: 100%|██████████| 98/98 [00:02<00:00, 36.62it/s, disc_loss=0.357, gen_loss=2.05] Training Epoch 379/501: 100%|██████████| 98/98 [00:02<00:00, 34.04it/s, disc_loss=0.361, gen_loss=2.4] Training Epoch 380/501: 100%|██████████| 98/98 [00:02<00:00, 33.86it/s, disc_loss=0.369, gen_loss=2.58] Training Epoch 381/501: 100%|██████████| 98/98 [00:02<00:00, 34.20it/s, disc_loss=0.365, gen_loss=1.93] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 57.562255859375, KID: 0.04314497858285904
Training Epoch 382/501: 100%|██████████| 98/98 [00:02<00:00, 36.26it/s, disc_loss=0.361, gen_loss=2.19] Training Epoch 383/501: 100%|██████████| 98/98 [00:02<00:00, 37.10it/s, disc_loss=0.362, gen_loss=2.31] Training Epoch 384/501: 100%|██████████| 98/98 [00:02<00:00, 34.95it/s, disc_loss=0.373, gen_loss=2.61] Training Epoch 385/501: 100%|██████████| 98/98 [00:02<00:00, 35.18it/s, disc_loss=0.402, gen_loss=2.58] Training Epoch 386/501: 100%|██████████| 98/98 [00:02<00:00, 40.57it/s, disc_loss=0.362, gen_loss=2.22] Training Epoch 387/501: 100%|██████████| 98/98 [00:02<00:00, 38.88it/s, disc_loss=0.361, gen_loss=2.01] Training Epoch 388/501: 100%|██████████| 98/98 [00:02<00:00, 36.15it/s, disc_loss=0.371, gen_loss=2.01] Training Epoch 389/501: 100%|██████████| 98/98 [00:02<00:00, 36.35it/s, disc_loss=0.368, gen_loss=2.48] Training Epoch 390/501: 100%|██████████| 98/98 [00:02<00:00, 35.60it/s, disc_loss=0.389, gen_loss=2.24] Training Epoch 391/501: 100%|██████████| 98/98 [00:02<00:00, 41.99it/s, disc_loss=0.368, gen_loss=1.88] Training Epoch 392/501: 100%|██████████| 98/98 [00:02<00:00, 41.09it/s, disc_loss=0.362, gen_loss=1.65] Training Epoch 393/501: 100%|██████████| 98/98 [00:02<00:00, 35.09it/s, disc_loss=0.364, gen_loss=2.59] Training Epoch 394/501: 100%|██████████| 98/98 [00:02<00:00, 40.84it/s, disc_loss=0.404, gen_loss=3.51] Training Epoch 395/501: 100%|██████████| 98/98 [00:02<00:00, 35.36it/s, disc_loss=0.425, gen_loss=2.99] Training Epoch 396/501: 100%|██████████| 98/98 [00:02<00:00, 39.11it/s, disc_loss=0.383, gen_loss=2.23] Training Epoch 397/501: 100%|██████████| 98/98 [00:02<00:00, 37.43it/s, disc_loss=0.364, gen_loss=2.02] Training Epoch 398/501: 100%|██████████| 98/98 [00:02<00:00, 37.31it/s, disc_loss=0.363, gen_loss=2.76] Training Epoch 399/501: 100%|██████████| 98/98 [00:02<00:00, 37.09it/s, disc_loss=0.379, gen_loss=1.42] Training Epoch 400/501: 100%|██████████| 98/98 [00:02<00:00, 37.11it/s, disc_loss=0.367, gen_loss=3.39] Training Epoch 401/501: 100%|██████████| 98/98 [00:02<00:00, 37.15it/s, disc_loss=0.368, gen_loss=2.25] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 53.145652770996094, KID: 0.03413400426506996
Training Epoch 402/501: 100%|██████████| 98/98 [00:02<00:00, 37.54it/s, disc_loss=0.359, gen_loss=2.17] Training Epoch 403/501: 100%|██████████| 98/98 [00:02<00:00, 37.12it/s, disc_loss=0.421, gen_loss=1.07] Training Epoch 404/501: 100%|██████████| 98/98 [00:02<00:00, 35.56it/s, disc_loss=0.387, gen_loss=1.57] Training Epoch 405/501: 100%|██████████| 98/98 [00:02<00:00, 35.48it/s, disc_loss=0.374, gen_loss=2.29] Training Epoch 406/501: 100%|██████████| 98/98 [00:02<00:00, 35.74it/s, disc_loss=0.365, gen_loss=1.97] Training Epoch 407/501: 100%|██████████| 98/98 [00:02<00:00, 35.90it/s, disc_loss=0.378, gen_loss=2.5] Training Epoch 408/501: 100%|██████████| 98/98 [00:02<00:00, 37.82it/s, disc_loss=0.365, gen_loss=2.39] Training Epoch 409/501: 100%|██████████| 98/98 [00:02<00:00, 33.77it/s, disc_loss=0.365, gen_loss=2.63] Training Epoch 410/501: 100%|██████████| 98/98 [00:02<00:00, 36.71it/s, disc_loss=0.357, gen_loss=1.99] Training Epoch 411/501: 100%|██████████| 98/98 [00:02<00:00, 37.82it/s, disc_loss=0.374, gen_loss=2.7] Training Epoch 412/501: 100%|██████████| 98/98 [00:02<00:00, 40.85it/s, disc_loss=0.48, gen_loss=1.66] Training Epoch 413/501: 100%|██████████| 98/98 [00:02<00:00, 42.12it/s, disc_loss=0.37, gen_loss=2.33] Training Epoch 414/501: 100%|██████████| 98/98 [00:02<00:00, 35.55it/s, disc_loss=0.362, gen_loss=2.66] Training Epoch 415/501: 100%|██████████| 98/98 [00:02<00:00, 40.23it/s, disc_loss=0.369, gen_loss=1.69] Training Epoch 416/501: 100%|██████████| 98/98 [00:02<00:00, 40.08it/s, disc_loss=0.385, gen_loss=3.21] Training Epoch 417/501: 100%|██████████| 98/98 [00:02<00:00, 36.05it/s, disc_loss=0.359, gen_loss=2.05] Training Epoch 418/501: 100%|██████████| 98/98 [00:02<00:00, 37.14it/s, disc_loss=0.375, gen_loss=2.63] Training Epoch 419/501: 100%|██████████| 98/98 [00:02<00:00, 33.19it/s, disc_loss=0.478, gen_loss=1.32] Training Epoch 420/501: 100%|██████████| 98/98 [00:02<00:00, 37.20it/s, disc_loss=0.368, gen_loss=2] Training Epoch 421/501: 100%|██████████| 98/98 [00:02<00:00, 37.03it/s, disc_loss=0.349, gen_loss=2.53] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 55.108726501464844, KID: 0.03441181406378746
Training Epoch 422/501: 100%|██████████| 98/98 [00:02<00:00, 38.81it/s, disc_loss=0.36, gen_loss=2.74] Training Epoch 423/501: 100%|██████████| 98/98 [00:02<00:00, 39.68it/s, disc_loss=0.359, gen_loss=2.52] Training Epoch 424/501: 100%|██████████| 98/98 [00:02<00:00, 40.29it/s, disc_loss=0.351, gen_loss=2.75] Training Epoch 425/501: 100%|██████████| 98/98 [00:02<00:00, 39.14it/s, disc_loss=0.364, gen_loss=2.3] Training Epoch 426/501: 100%|██████████| 98/98 [00:02<00:00, 37.91it/s, disc_loss=0.368, gen_loss=1.92] Training Epoch 427/501: 100%|██████████| 98/98 [00:02<00:00, 36.56it/s, disc_loss=0.369, gen_loss=2.92] Training Epoch 428/501: 100%|██████████| 98/98 [00:02<00:00, 33.32it/s, disc_loss=0.379, gen_loss=1.8] Training Epoch 429/501: 100%|██████████| 98/98 [00:02<00:00, 34.92it/s, disc_loss=0.361, gen_loss=2.38] Training Epoch 430/501: 100%|██████████| 98/98 [00:02<00:00, 35.31it/s, disc_loss=0.396, gen_loss=1.69] Training Epoch 431/501: 100%|██████████| 98/98 [00:02<00:00, 39.99it/s, disc_loss=0.377, gen_loss=2.87] Training Epoch 432/501: 100%|██████████| 98/98 [00:02<00:00, 35.30it/s, disc_loss=0.379, gen_loss=1.51] Training Epoch 433/501: 100%|██████████| 98/98 [00:02<00:00, 36.33it/s, disc_loss=0.367, gen_loss=1.96] Training Epoch 434/501: 100%|██████████| 98/98 [00:02<00:00, 36.61it/s, disc_loss=0.361, gen_loss=1.85] Training Epoch 435/501: 100%|██████████| 98/98 [00:02<00:00, 36.75it/s, disc_loss=0.362, gen_loss=1.84] Training Epoch 436/501: 100%|██████████| 98/98 [00:02<00:00, 36.27it/s, disc_loss=0.369, gen_loss=1.72] Training Epoch 437/501: 100%|██████████| 98/98 [00:02<00:00, 35.87it/s, disc_loss=0.376, gen_loss=2.5] Training Epoch 438/501: 100%|██████████| 98/98 [00:02<00:00, 38.80it/s, disc_loss=0.394, gen_loss=1.51] Training Epoch 439/501: 100%|██████████| 98/98 [00:02<00:00, 34.99it/s, disc_loss=0.361, gen_loss=2.34] Training Epoch 440/501: 100%|██████████| 98/98 [00:02<00:00, 36.57it/s, disc_loss=0.364, gen_loss=1.94] Training Epoch 441/501: 100%|██████████| 98/98 [00:02<00:00, 38.18it/s, disc_loss=0.401, gen_loss=2.25] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 51.564632415771484, KID: 0.02987150102853775
Training Epoch 442/501: 100%|██████████| 98/98 [00:02<00:00, 35.65it/s, disc_loss=0.358, gen_loss=2.26] Training Epoch 443/501: 100%|██████████| 98/98 [00:02<00:00, 36.06it/s, disc_loss=0.366, gen_loss=2.24] Training Epoch 444/501: 100%|██████████| 98/98 [00:02<00:00, 37.15it/s, disc_loss=0.353, gen_loss=2.46] Training Epoch 445/501: 100%|██████████| 98/98 [00:02<00:00, 32.72it/s, disc_loss=0.392, gen_loss=3.01] Training Epoch 446/501: 100%|██████████| 98/98 [00:02<00:00, 35.73it/s, disc_loss=0.375, gen_loss=2.5] Training Epoch 447/501: 100%|██████████| 98/98 [00:02<00:00, 35.92it/s, disc_loss=0.364, gen_loss=2.87] Training Epoch 448/501: 100%|██████████| 98/98 [00:02<00:00, 36.22it/s, disc_loss=0.363, gen_loss=2.42] Training Epoch 449/501: 100%|██████████| 98/98 [00:02<00:00, 42.15it/s, disc_loss=0.403, gen_loss=3.01] Training Epoch 450/501: 100%|██████████| 98/98 [00:02<00:00, 37.87it/s, disc_loss=0.367, gen_loss=2.26] Training Epoch 451/501: 100%|██████████| 98/98 [00:02<00:00, 35.75it/s, disc_loss=0.357, gen_loss=2.21]
Training Epoch 452/501: 100%|██████████| 98/98 [00:02<00:00, 37.72it/s, disc_loss=0.372, gen_loss=2.46] Training Epoch 453/501: 100%|██████████| 98/98 [00:02<00:00, 42.37it/s, disc_loss=0.378, gen_loss=2.2] Training Epoch 454/501: 100%|██████████| 98/98 [00:02<00:00, 36.87it/s, disc_loss=0.376, gen_loss=2.47] Training Epoch 455/501: 100%|██████████| 98/98 [00:02<00:00, 35.75it/s, disc_loss=0.367, gen_loss=1.78] Training Epoch 456/501: 100%|██████████| 98/98 [00:02<00:00, 36.22it/s, disc_loss=0.372, gen_loss=2.73] Training Epoch 457/501: 100%|██████████| 98/98 [00:02<00:00, 37.52it/s, disc_loss=0.362, gen_loss=2.39] Training Epoch 458/501: 100%|██████████| 98/98 [00:02<00:00, 36.69it/s, disc_loss=0.373, gen_loss=1.84] Training Epoch 459/501: 100%|██████████| 98/98 [00:02<00:00, 36.34it/s, disc_loss=0.366, gen_loss=2.21] Training Epoch 460/501: 100%|██████████| 98/98 [00:02<00:00, 34.16it/s, disc_loss=0.373, gen_loss=2.59] Training Epoch 461/501: 100%|██████████| 98/98 [00:02<00:00, 37.45it/s, disc_loss=0.365, gen_loss=2.22] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 63.933658599853516, KID: 0.04409979656338692
Training Epoch 462/501: 100%|██████████| 98/98 [00:02<00:00, 34.20it/s, disc_loss=0.425, gen_loss=3.25] Training Epoch 463/501: 100%|██████████| 98/98 [00:02<00:00, 33.82it/s, disc_loss=0.367, gen_loss=3.15] Training Epoch 464/501: 100%|██████████| 98/98 [00:02<00:00, 33.83it/s, disc_loss=0.358, gen_loss=2.11] Training Epoch 465/501: 100%|██████████| 98/98 [00:02<00:00, 39.43it/s, disc_loss=0.403, gen_loss=2.91] Training Epoch 466/501: 100%|██████████| 98/98 [00:02<00:00, 35.49it/s, disc_loss=0.367, gen_loss=2.16] Training Epoch 467/501: 100%|██████████| 98/98 [00:02<00:00, 33.59it/s, disc_loss=0.365, gen_loss=2.12] Training Epoch 468/501: 100%|██████████| 98/98 [00:02<00:00, 33.65it/s, disc_loss=0.356, gen_loss=2.44] Training Epoch 469/501: 100%|██████████| 98/98 [00:02<00:00, 32.70it/s, disc_loss=0.351, gen_loss=2.59] Training Epoch 470/501: 100%|██████████| 98/98 [00:02<00:00, 36.41it/s, disc_loss=0.387, gen_loss=1.97] Training Epoch 471/501: 100%|██████████| 98/98 [00:02<00:00, 41.72it/s, disc_loss=0.397, gen_loss=2.87] Training Epoch 472/501: 100%|██████████| 98/98 [00:02<00:00, 36.20it/s, disc_loss=0.377, gen_loss=2.6] Training Epoch 473/501: 100%|██████████| 98/98 [00:02<00:00, 42.10it/s, disc_loss=0.368, gen_loss=2.04] Training Epoch 474/501: 100%|██████████| 98/98 [00:02<00:00, 39.92it/s, disc_loss=0.386, gen_loss=3.22] Training Epoch 475/501: 100%|██████████| 98/98 [00:02<00:00, 34.95it/s, disc_loss=0.366, gen_loss=2.22] Training Epoch 476/501: 100%|██████████| 98/98 [00:02<00:00, 40.97it/s, disc_loss=0.371, gen_loss=1.93] Training Epoch 477/501: 100%|██████████| 98/98 [00:02<00:00, 35.37it/s, disc_loss=0.355, gen_loss=2.19] Training Epoch 478/501: 100%|██████████| 98/98 [00:03<00:00, 32.48it/s, disc_loss=0.369, gen_loss=1.66] Training Epoch 479/501: 100%|██████████| 98/98 [00:02<00:00, 35.99it/s, disc_loss=0.37, gen_loss=2.1] Training Epoch 480/501: 100%|██████████| 98/98 [00:02<00:00, 41.77it/s, disc_loss=0.421, gen_loss=1.28] Training Epoch 481/501: 100%|██████████| 98/98 [00:02<00:00, 35.74it/s, disc_loss=0.384, gen_loss=1.86] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 55.597721099853516, KID: 0.031788088381290436
Training Epoch 482/501: 100%|██████████| 98/98 [00:02<00:00, 34.54it/s, disc_loss=0.421, gen_loss=1.25] Training Epoch 483/501: 100%|██████████| 98/98 [00:02<00:00, 37.24it/s, disc_loss=0.36, gen_loss=2.36] Training Epoch 484/501: 100%|██████████| 98/98 [00:02<00:00, 37.81it/s, disc_loss=0.377, gen_loss=1.87] Training Epoch 485/501: 100%|██████████| 98/98 [00:02<00:00, 36.50it/s, disc_loss=0.376, gen_loss=2.17] Training Epoch 486/501: 100%|██████████| 98/98 [00:02<00:00, 35.00it/s, disc_loss=0.367, gen_loss=1.93] Training Epoch 487/501: 100%|██████████| 98/98 [00:02<00:00, 33.21it/s, disc_loss=0.371, gen_loss=2.78] Training Epoch 488/501: 100%|██████████| 98/98 [00:02<00:00, 35.43it/s, disc_loss=0.372, gen_loss=2.49] Training Epoch 489/501: 100%|██████████| 98/98 [00:02<00:00, 35.24it/s, disc_loss=0.368, gen_loss=1.98] Training Epoch 490/501: 100%|██████████| 98/98 [00:02<00:00, 39.87it/s, disc_loss=0.377, gen_loss=1.88] Training Epoch 491/501: 100%|██████████| 98/98 [00:02<00:00, 34.99it/s, disc_loss=0.364, gen_loss=2.26] Training Epoch 492/501: 100%|██████████| 98/98 [00:02<00:00, 37.46it/s, disc_loss=0.364, gen_loss=2.28] Training Epoch 493/501: 100%|██████████| 98/98 [00:02<00:00, 36.40it/s, disc_loss=0.411, gen_loss=3.59] Training Epoch 494/501: 100%|██████████| 98/98 [00:02<00:00, 40.55it/s, disc_loss=0.35, gen_loss=2.37] Training Epoch 495/501: 100%|██████████| 98/98 [00:02<00:00, 43.14it/s, disc_loss=0.359, gen_loss=2.15] Training Epoch 496/501: 100%|██████████| 98/98 [00:02<00:00, 34.88it/s, disc_loss=0.356, gen_loss=2.16] Training Epoch 497/501: 100%|██████████| 98/98 [00:02<00:00, 40.41it/s, disc_loss=0.43, gen_loss=0.979] Training Epoch 498/501: 100%|██████████| 98/98 [00:02<00:00, 37.77it/s, disc_loss=0.361, gen_loss=2.53] Training Epoch 499/501: 100%|██████████| 98/98 [00:02<00:00, 40.60it/s, disc_loss=0.382, gen_loss=1.68] Training Epoch 500/501: 100%|██████████| 98/98 [00:02<00:00, 42.43it/s, disc_loss=0.362, gen_loss=2.74] Training Epoch 501/501: 100%|██████████| 98/98 [00:02<00:00, 38.17it/s, disc_loss=0.359, gen_loss=2.12] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 59.407806396484375, KID: 0.03649253398180008
<Figure size 1200x600 with 0 Axes>
Observations¶
- After adding regularisation, from the training graph we can observe the generator overpowering the discriminator as the discriminator loss keeps decreasing while the generator plateaus
- Even though FID increases at the beginning, it slowly deteriorates as the discirminator is providing wrong feedback to the generator
- Images seem more realistic but might have some mode collapse towards the end.
Balanced Training¶
To solve the problem of discriminator or generator overpowering each other, we introduced a balancing mechanism that will stop training one of the models if it gets too powerful and allow the other to catch up.
Precision Score
In this mechanism, as our key metric we will be using Precision (or recall depnding on how you see it) to determine if the generator is fooling the discriminator well or the other way around. We chose it over other scores due to its lightweight and simplicity. Precision is defined as
$\text{Precision} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Positives}}$
In our case, the True Positives are images which are real that was classified by the discirminator as real. False positives are fake images that were classified as real. In our training the discriminator will aim to minimise false positives and generator will try to maximise it.
Our Baselines
Precision Score
0 10 20 30 40 50 60 70 80 90 100
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
Gen Strong ^ Buffer Zone ^ Disc Strong
30 mark 70 mark
We set a arbitrary baseline of 30% and 70%. Once the Precision drops below 30%, it means the generator is overpowering and fooling the discriminator more. If precision is more than 70%, it means the discriminator is overpowering. of course we could use other baselines but we will use this to allow a 40% range where neither overpower the other.
Use of Circular Buffer
Calculating Precision can be done for every batch and decision to traing can be based off of that but we find this wrong. Certain batches are just hard to classify and if the discriminator doesn't perform it might be due to the data rather than the model. To solve this problem, we introduce a circular buffer to store past precision scores. Everytime the model calculates precision, it will take the last $n$ precision scores and average it, where $n$ can be any number but here we will use the batch_size.
I will also give a 20 epoch buffer time for both models to get settled before this mechanism takes effect.
class CircularBuffer:
def __init__(self, size):
self.size = size
self.buffer = [None] * size
self.index = 0
self.full = False
def add(self, value):
self.buffer[self.index] = value
self.index = (self.index + 1) % self.size
if self.index == 0:
self.full = True
def get_values(self):
if self.full:
return self.buffer
else:
return self.buffer[:self.index]
def values(self):
values = self.get_values()
return sum(values) / max(len(values), 1)
class BalancedGAN(R1GAN):
def __init__(self, generator, discriminator, train_loader):
super().__init__(generator, discriminator, train_loader)
self.buffer = CircularBuffer(size=TRAIN_BATCH_SIZE)
def fit(self, epochs, train_loader):
print(f"Training {self.__class__.__name__} for {epochs} Epochs")
self.discriminator.train()
self.generator.train()
for epoch in range(epochs):
disc_losses, gen_losses = [], []
last_disc, last_gen = 0, 0
progress = tqdm(train_loader, desc=f'Training Epoch {epoch + 1}/{epochs}', leave=True, colour="green", dynamic_ncols=True)
for img, label in progress:
precision = self.get_precision(img, label)
self.buffer.add(precision)
if self.buffer.values() <= 0.30 and epoch > 5:
disc_loss = self.disc_step(img, label)
last_disc = disc_loss
disc_losses.append(disc_loss)
gen_losses.append(last_gen)
progress.set_postfix(disc_loss=disc_loss)
elif self.buffer.values() >= 0.70 and epoch > 5:
gen_loss = self.gen_step(img, label)
last_gen = gen_loss
gen_losses.append(gen_loss)
disc_losses.append(last_disc)
progress.set_postfix(gen_loss=gen_loss)
else:
disc_loss = self.disc_step(img, label)
gen_loss = self.gen_step(img, label)
disc_losses.append(disc_loss)
gen_losses.append(gen_loss)
progress.set_postfix(disc_loss=disc_loss, gen_loss=gen_loss)
self.disc_scores.append(np.mean(disc_losses))
self.gen_scores.append(np.mean(gen_losses))
self.on_epoch_end(epoch)
def get_precision(self, img, label):
with torch.no_grad():
img = img.to(device)
label = label.to(device)
fake_label = torch.zeros((img.size()[0], 1), device=device)
real_label = torch.ones((img.size()[0], 1), device=device)
noise = torch.normal(0, 1, (img.size()[0], self.generator.latent_dim), device=device)
fake_imgs = self.generator(noise, label)
fake_pred = self.discriminator(fake_imgs, label)
real_pred = self.discriminator(img, label)
tp_r = ((fake_pred <= 0.5) & (fake_label <= 0.5)).sum().item()
tp_f = ((real_pred <= 0.5) & (real_label <= 0.5)).sum().item()
tp = tp_r + tp_f
fp_r = ((fake_pred >= 0.5) & (fake_label <= 0.5)).sum().item()
fp_f = ((real_pred >= 0.5) & (real_label <= 0.5)).sum().item()
fp = fp_r + fp_f
precision = tp / max((tp + fp), 1e-9) # Avoid division by zero
return precision
gen_simple = ResizeGenerator(128,1024).to(device)
disc_simple = SimpleDiscriminator().to(device)
balancedgan = BalancedGAN(gen_simple,disc_simple,train_loader)
balancedgan.fit(501,train_loader)
plot_losses(501,[(r1gan,"R1 & Smoothing"),(balancedgan,"Balanced")])
balancedgan.save("balanced-500e-v2")
torch.cuda.empty_cache()
Training BalancedGAN for 501 Epochs
Training Epoch 1/501: 100%|██████████| 98/98 [00:03<00:00, 29.93it/s, disc_loss=0.514, gen_loss=1.28]
100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 120.90723419189453, KID: 0.12361392378807068
Training Epoch 2/501: 100%|██████████| 98/98 [00:02<00:00, 34.65it/s, disc_loss=0.574, gen_loss=1.1] Training Epoch 3/501: 100%|██████████| 98/98 [00:03<00:00, 32.61it/s, disc_loss=0.516, gen_loss=1.64] Training Epoch 4/501: 100%|██████████| 98/98 [00:02<00:00, 35.70it/s, disc_loss=0.575, gen_loss=1.67] Training Epoch 5/501: 100%|██████████| 98/98 [00:02<00:00, 33.71it/s, disc_loss=0.566, gen_loss=1.29] Training Epoch 6/501: 100%|██████████| 98/98 [00:02<00:00, 34.56it/s, disc_loss=0.513, gen_loss=0.887] Training Epoch 7/501: 100%|██████████| 98/98 [00:01<00:00, 60.28it/s, disc_loss=0.64, gen_loss=1.35] Training Epoch 8/501: 100%|██████████| 98/98 [00:02<00:00, 34.65it/s, gen_loss=0.976] Training Epoch 9/501: 100%|██████████| 98/98 [00:02<00:00, 41.17it/s, gen_loss=1.15] Training Epoch 10/501: 100%|██████████| 98/98 [00:02<00:00, 40.42it/s, disc_loss=0.573, gen_loss=1.1] Training Epoch 11/501: 100%|██████████| 98/98 [00:02<00:00, 38.45it/s, disc_loss=0.675, gen_loss=1.37] Training Epoch 12/501: 100%|██████████| 98/98 [00:01<00:00, 57.42it/s, gen_loss=0.0489] Training Epoch 13/501: 100%|██████████| 98/98 [00:02<00:00, 44.42it/s, disc_loss=0.462, gen_loss=1.51] Training Epoch 14/501: 100%|██████████| 98/98 [00:02<00:00, 39.14it/s, disc_loss=0.486, gen_loss=1.44] Training Epoch 15/501: 100%|██████████| 98/98 [00:02<00:00, 39.45it/s, gen_loss=1.1] Training Epoch 16/501: 100%|██████████| 98/98 [00:02<00:00, 39.45it/s, disc_loss=0.479, gen_loss=1.73] Training Epoch 17/501: 100%|██████████| 98/98 [00:02<00:00, 45.57it/s, gen_loss=0.07] Training Epoch 18/501: 100%|██████████| 98/98 [00:02<00:00, 45.24it/s, disc_loss=0.585, gen_loss=1.57] Training Epoch 19/501: 100%|██████████| 98/98 [00:02<00:00, 39.77it/s, disc_loss=0.528, gen_loss=1.39] Training Epoch 20/501: 100%|██████████| 98/98 [00:02<00:00, 45.23it/s, disc_loss=0.474, gen_loss=1.56] Training Epoch 21/501: 100%|██████████| 98/98 [00:02<00:00, 41.02it/s, disc_loss=0.427, gen_loss=1.76] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 67.30870056152344, KID: 0.05785881355404854
Training Epoch 22/501: 100%|██████████| 98/98 [00:02<00:00, 48.41it/s, gen_loss=0.15] Training Epoch 23/501: 100%|██████████| 98/98 [00:01<00:00, 54.15it/s, disc_loss=0.484, gen_loss=1.24] Training Epoch 24/501: 100%|██████████| 98/98 [00:02<00:00, 46.80it/s, disc_loss=0.626, gen_loss=1.07] Training Epoch 25/501: 100%|██████████| 98/98 [00:02<00:00, 46.16it/s, disc_loss=0.47, gen_loss=1.5] Training Epoch 26/501: 100%|██████████| 98/98 [00:01<00:00, 49.01it/s, disc_loss=0.742, gen_loss=1.21] Training Epoch 27/501: 100%|██████████| 98/98 [00:02<00:00, 36.88it/s, disc_loss=0.41, gen_loss=1.87] Training Epoch 28/501: 100%|██████████| 98/98 [00:01<00:00, 65.85it/s, disc_loss=0.517, gen_loss=1.54] Training Epoch 29/501: 100%|██████████| 98/98 [00:02<00:00, 44.36it/s, gen_loss=0.872] Training Epoch 30/501: 100%|██████████| 98/98 [00:02<00:00, 43.62it/s, disc_loss=0.447, gen_loss=2.01] Training Epoch 31/501: 100%|██████████| 98/98 [00:02<00:00, 42.83it/s, gen_loss=1.01] Training Epoch 32/501: 100%|██████████| 98/98 [00:02<00:00, 45.02it/s, disc_loss=0.532, gen_loss=1.81] Training Epoch 33/501: 100%|██████████| 98/98 [00:01<00:00, 66.40it/s, gen_loss=0.0482] Training Epoch 34/501: 100%|██████████| 98/98 [00:02<00:00, 44.36it/s, disc_loss=0.491, gen_loss=1.6] Training Epoch 35/501: 100%|██████████| 98/98 [00:01<00:00, 53.90it/s, disc_loss=0.533, gen_loss=1.27] Training Epoch 36/501: 100%|██████████| 98/98 [00:02<00:00, 41.17it/s, gen_loss=1.95] Training Epoch 37/501: 100%|██████████| 98/98 [00:02<00:00, 43.12it/s, disc_loss=0.466, gen_loss=1.53] Training Epoch 38/501: 100%|██████████| 98/98 [00:02<00:00, 48.55it/s, gen_loss=0.0691] Training Epoch 39/501: 100%|██████████| 98/98 [00:02<00:00, 47.48it/s, disc_loss=0.545, gen_loss=1.28] Training Epoch 40/501: 100%|██████████| 98/98 [00:02<00:00, 47.08it/s, disc_loss=0.663, gen_loss=1.97] Training Epoch 41/501: 100%|██████████| 98/98 [00:02<00:00, 40.69it/s, gen_loss=0.847] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 74.90516662597656, KID: 0.06265890598297119
Training Epoch 42/501: 100%|██████████| 98/98 [00:02<00:00, 48.49it/s, disc_loss=0.466, gen_loss=1.32] Training Epoch 43/501: 100%|██████████| 98/98 [00:01<00:00, 49.64it/s, gen_loss=0.131] Training Epoch 44/501: 100%|██████████| 98/98 [00:01<00:00, 53.29it/s, disc_loss=0.414, gen_loss=1.88] Training Epoch 45/501: 100%|██████████| 98/98 [00:02<00:00, 41.25it/s, gen_loss=1.19] Training Epoch 46/501: 100%|██████████| 98/98 [00:02<00:00, 38.82it/s, disc_loss=0.529, gen_loss=1.16] Training Epoch 47/501: 100%|██████████| 98/98 [00:02<00:00, 38.91it/s, gen_loss=1.12] Training Epoch 48/501: 100%|██████████| 98/98 [00:02<00:00, 40.60it/s, disc_loss=0.574, gen_loss=0.876] Training Epoch 49/501: 100%|██████████| 98/98 [00:01<00:00, 61.66it/s, disc_loss=0.508, gen_loss=1.7] Training Epoch 50/501: 100%|██████████| 98/98 [00:02<00:00, 41.28it/s, disc_loss=0.439, gen_loss=1.83] Training Epoch 51/501: 100%|██████████| 98/98 [00:02<00:00, 40.10it/s, gen_loss=1.58]
Training Epoch 52/501: 100%|██████████| 98/98 [00:02<00:00, 42.07it/s, disc_loss=0.447, gen_loss=2] Training Epoch 53/501: 100%|██████████| 98/98 [00:02<00:00, 38.09it/s, disc_loss=0.614, gen_loss=1.8] Training Epoch 54/501: 100%|██████████| 98/98 [00:01<00:00, 66.26it/s, gen_loss=0.0609] Training Epoch 55/501: 100%|██████████| 98/98 [00:02<00:00, 46.08it/s, disc_loss=0.474, gen_loss=1.71] Training Epoch 56/501: 100%|██████████| 98/98 [00:02<00:00, 39.97it/s, disc_loss=0.459, gen_loss=1.93] Training Epoch 57/501: 100%|██████████| 98/98 [00:02<00:00, 41.30it/s, disc_loss=0.41, gen_loss=1.61] Training Epoch 58/501: 100%|██████████| 98/98 [00:02<00:00, 40.66it/s, disc_loss=0.557, gen_loss=1.05] Training Epoch 59/501: 100%|██████████| 98/98 [00:01<00:00, 53.09it/s, gen_loss=0.0869] Training Epoch 60/501: 100%|██████████| 98/98 [00:01<00:00, 49.95it/s, disc_loss=0.871, gen_loss=1.87] Training Epoch 61/501: 100%|██████████| 98/98 [00:02<00:00, 37.11it/s, gen_loss=0.807] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 65.28886413574219, KID: 0.055721815675497055
Training Epoch 62/501: 100%|██████████| 98/98 [00:02<00:00, 48.88it/s, gen_loss=1.74] Training Epoch 63/501: 100%|██████████| 98/98 [00:02<00:00, 45.06it/s, disc_loss=0.443, gen_loss=1.77] Training Epoch 64/501: 100%|██████████| 98/98 [00:01<00:00, 54.67it/s, gen_loss=0.361] Training Epoch 65/501: 100%|██████████| 98/98 [00:01<00:00, 58.34it/s, disc_loss=0.379, gen_loss=1.8] Training Epoch 66/501: 100%|██████████| 98/98 [00:02<00:00, 42.22it/s, disc_loss=0.429, gen_loss=1.62] Training Epoch 67/501: 100%|██████████| 98/98 [00:02<00:00, 41.82it/s, disc_loss=0.495, gen_loss=1.7] Training Epoch 68/501: 100%|██████████| 98/98 [00:02<00:00, 42.20it/s, gen_loss=0.879] Training Epoch 69/501: 100%|██████████| 98/98 [00:02<00:00, 41.95it/s, gen_loss=1.15] Training Epoch 70/501: 100%|██████████| 98/98 [00:01<00:00, 54.79it/s, gen_loss=0.0551] Training Epoch 71/501: 100%|██████████| 98/98 [00:02<00:00, 39.80it/s, disc_loss=0.542, gen_loss=1.25] Training Epoch 72/501: 100%|██████████| 98/98 [00:02<00:00, 41.80it/s, disc_loss=0.437, gen_loss=1.87] Training Epoch 73/501: 100%|██████████| 98/98 [00:02<00:00, 46.39it/s, disc_loss=0.455, gen_loss=1.63] Training Epoch 74/501: 100%|██████████| 98/98 [00:02<00:00, 41.41it/s, disc_loss=0.424, gen_loss=1.35] Training Epoch 75/501: 100%|██████████| 98/98 [00:01<00:00, 54.20it/s, gen_loss=0.055] Training Epoch 76/501: 100%|██████████| 98/98 [00:02<00:00, 45.33it/s, gen_loss=0.656] Training Epoch 77/501: 100%|██████████| 98/98 [00:02<00:00, 43.68it/s, disc_loss=0.445, gen_loss=1.97] Training Epoch 78/501: 100%|██████████| 98/98 [00:01<00:00, 53.36it/s, gen_loss=1.22] Training Epoch 79/501: 100%|██████████| 98/98 [00:02<00:00, 35.90it/s, gen_loss=1.23] Training Epoch 80/501: 100%|██████████| 98/98 [00:01<00:00, 52.52it/s, gen_loss=0.112] Training Epoch 81/501: 100%|██████████| 98/98 [00:01<00:00, 49.02it/s, disc_loss=0.472, gen_loss=1.18] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 53.33794021606445, KID: 0.042434852570295334
Training Epoch 82/501: 100%|██████████| 98/98 [00:02<00:00, 44.15it/s, disc_loss=0.424, gen_loss=1.43] Training Epoch 83/501: 100%|██████████| 98/98 [00:02<00:00, 38.23it/s, disc_loss=0.432, gen_loss=1.61] Training Epoch 84/501: 100%|██████████| 98/98 [00:02<00:00, 44.06it/s, disc_loss=0.407, gen_loss=1.57] Training Epoch 85/501: 100%|██████████| 98/98 [00:02<00:00, 44.74it/s, gen_loss=0.549] Training Epoch 86/501: 100%|██████████| 98/98 [00:01<00:00, 71.17it/s, disc_loss=0.435, gen_loss=1.78] Training Epoch 87/501: 100%|██████████| 98/98 [00:02<00:00, 44.46it/s, gen_loss=0.8] Training Epoch 88/501: 100%|██████████| 98/98 [00:02<00:00, 36.09it/s, gen_loss=1.1] Training Epoch 89/501: 100%|██████████| 98/98 [00:02<00:00, 41.46it/s, disc_loss=0.744, gen_loss=3] Training Epoch 90/501: 100%|██████████| 98/98 [00:02<00:00, 38.33it/s, gen_loss=1.11] Training Epoch 91/501: 100%|██████████| 98/98 [00:01<00:00, 70.87it/s, gen_loss=0.0585] Training Epoch 92/501: 100%|██████████| 98/98 [00:01<00:00, 50.19it/s, gen_loss=1.1] Training Epoch 93/501: 100%|██████████| 98/98 [00:01<00:00, 65.46it/s, gen_loss=1.41] Training Epoch 94/501: 100%|██████████| 98/98 [00:01<00:00, 67.13it/s, gen_loss=0.934] Training Epoch 95/501: 100%|██████████| 98/98 [00:01<00:00, 68.33it/s, gen_loss=0.573] Training Epoch 96/501: 100%|██████████| 98/98 [00:01<00:00, 69.39it/s, gen_loss=0.15] Training Epoch 97/501: 100%|██████████| 98/98 [00:02<00:00, 43.11it/s, gen_loss=1.41] Training Epoch 98/501: 100%|██████████| 98/98 [00:01<00:00, 49.78it/s, disc_loss=0.405, gen_loss=2.85] Training Epoch 99/501: 100%|██████████| 98/98 [00:02<00:00, 35.82it/s, disc_loss=0.372, gen_loss=2.3] Training Epoch 100/501: 100%|██████████| 98/98 [00:02<00:00, 44.71it/s, disc_loss=0.429, gen_loss=1.89] Training Epoch 101/501: 100%|██████████| 98/98 [00:02<00:00, 48.82it/s, gen_loss=0.142] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 86.3828353881836, KID: 0.07647650688886642
Training Epoch 102/501: 100%|██████████| 98/98 [00:02<00:00, 48.68it/s, disc_loss=0.398, gen_loss=2.23] Training Epoch 103/501: 100%|██████████| 98/98 [00:02<00:00, 36.82it/s, disc_loss=0.386, gen_loss=1.63] Training Epoch 104/501: 100%|██████████| 98/98 [00:02<00:00, 33.74it/s, disc_loss=0.388, gen_loss=1.92] Training Epoch 105/501: 100%|██████████| 98/98 [00:01<00:00, 50.06it/s, disc_loss=0.434, gen_loss=1.55] Training Epoch 106/501: 100%|██████████| 98/98 [00:02<00:00, 36.28it/s, gen_loss=0.971] Training Epoch 107/501: 100%|██████████| 98/98 [00:01<00:00, 69.24it/s, disc_loss=0.464, gen_loss=2.99] Training Epoch 108/501: 100%|██████████| 98/98 [00:02<00:00, 37.90it/s, disc_loss=0.419, gen_loss=2.04] Training Epoch 109/501: 100%|██████████| 98/98 [00:02<00:00, 40.41it/s, disc_loss=0.536, gen_loss=2.33] Training Epoch 110/501: 100%|██████████| 98/98 [00:02<00:00, 33.56it/s, gen_loss=1.42] Training Epoch 111/501: 100%|██████████| 98/98 [00:02<00:00, 38.41it/s, disc_loss=0.406, gen_loss=2] Training Epoch 112/501: 100%|██████████| 98/98 [00:01<00:00, 60.87it/s, gen_loss=0.0385] Training Epoch 113/501: 100%|██████████| 98/98 [00:02<00:00, 34.96it/s, disc_loss=0.423, gen_loss=1.69] Training Epoch 114/501: 100%|██████████| 98/98 [00:02<00:00, 41.86it/s, disc_loss=0.454, gen_loss=1.6] Training Epoch 115/501: 100%|██████████| 98/98 [00:02<00:00, 37.26it/s, disc_loss=0.42, gen_loss=2.25] Training Epoch 116/501: 100%|██████████| 98/98 [00:02<00:00, 44.13it/s, gen_loss=0.625] Training Epoch 117/501: 100%|██████████| 98/98 [00:01<00:00, 52.46it/s, gen_loss=0.11] Training Epoch 118/501: 100%|██████████| 98/98 [00:02<00:00, 46.60it/s, disc_loss=0.449, gen_loss=2.5] Training Epoch 119/501: 100%|██████████| 98/98 [00:02<00:00, 38.81it/s, disc_loss=1.08, gen_loss=3.95] Training Epoch 120/501: 100%|██████████| 98/98 [00:02<00:00, 36.57it/s, disc_loss=0.419, gen_loss=1.79] Training Epoch 121/501: 100%|██████████| 98/98 [00:02<00:00, 37.49it/s, disc_loss=0.409, gen_loss=1.87] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 43.93115234375, KID: 0.03848229721188545
Training Epoch 122/501: 100%|██████████| 98/98 [00:02<00:00, 42.04it/s, gen_loss=0.0988] Training Epoch 123/501: 100%|██████████| 98/98 [00:01<00:00, 55.78it/s, disc_loss=0.461, gen_loss=1.83] Training Epoch 124/501: 100%|██████████| 98/98 [00:02<00:00, 34.76it/s, disc_loss=0.45, gen_loss=2.37] Training Epoch 125/501: 100%|██████████| 98/98 [00:02<00:00, 37.05it/s, disc_loss=0.44, gen_loss=1.41] Training Epoch 126/501: 100%|██████████| 98/98 [00:02<00:00, 38.59it/s, disc_loss=0.47, gen_loss=2.1] Training Epoch 127/501: 100%|██████████| 98/98 [00:02<00:00, 38.35it/s, gen_loss=0.603] Training Epoch 128/501: 100%|██████████| 98/98 [00:01<00:00, 62.56it/s, gen_loss=0.123] Training Epoch 129/501: 100%|██████████| 98/98 [00:02<00:00, 37.07it/s, disc_loss=0.43, gen_loss=1.82] Training Epoch 130/501: 100%|██████████| 98/98 [00:02<00:00, 38.69it/s, gen_loss=0.828] Training Epoch 131/501: 100%|██████████| 98/98 [00:02<00:00, 34.14it/s, disc_loss=0.545, gen_loss=0.812] Training Epoch 132/501: 100%|██████████| 98/98 [00:02<00:00, 38.62it/s, gen_loss=0.648] Training Epoch 133/501: 100%|██████████| 98/98 [00:01<00:00, 56.60it/s, gen_loss=0.327] Training Epoch 134/501: 100%|██████████| 98/98 [00:02<00:00, 44.77it/s, disc_loss=0.386, gen_loss=2.38] Training Epoch 135/501: 100%|██████████| 98/98 [00:02<00:00, 40.11it/s, disc_loss=0.455, gen_loss=1.78] Training Epoch 136/501: 100%|██████████| 98/98 [00:02<00:00, 35.09it/s, disc_loss=0.484, gen_loss=1.44] Training Epoch 137/501: 100%|██████████| 98/98 [00:02<00:00, 39.83it/s, disc_loss=0.414, gen_loss=1.68] Training Epoch 138/501: 100%|██████████| 98/98 [00:01<00:00, 58.74it/s, gen_loss=0.15] Training Epoch 139/501: 100%|██████████| 98/98 [00:01<00:00, 59.29it/s, disc_loss=0.452, gen_loss=1.93] Training Epoch 140/501: 100%|██████████| 98/98 [00:02<00:00, 39.69it/s, gen_loss=0.375] Training Epoch 141/501: 100%|██████████| 98/98 [00:02<00:00, 36.01it/s, disc_loss=0.421, gen_loss=1.58] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 43.45783615112305, KID: 0.03471718728542328
Training Epoch 142/501: 100%|██████████| 98/98 [00:02<00:00, 37.47it/s, disc_loss=0.399, gen_loss=2.13] Training Epoch 143/501: 100%|██████████| 98/98 [00:02<00:00, 43.37it/s, gen_loss=0.787] Training Epoch 144/501: 100%|██████████| 98/98 [00:01<00:00, 62.77it/s, gen_loss=0.0811] Training Epoch 145/501: 100%|██████████| 98/98 [00:02<00:00, 40.23it/s, disc_loss=0.424, gen_loss=1.95] Training Epoch 146/501: 100%|██████████| 98/98 [00:02<00:00, 38.45it/s, disc_loss=0.401, gen_loss=1.91] Training Epoch 147/501: 100%|██████████| 98/98 [00:02<00:00, 36.73it/s, gen_loss=0.601] Training Epoch 148/501: 100%|██████████| 98/98 [00:02<00:00, 38.52it/s, disc_loss=0.431, gen_loss=1.94] Training Epoch 149/501: 100%|██████████| 98/98 [00:01<00:00, 55.22it/s, gen_loss=0.128] Training Epoch 150/501: 100%|██████████| 98/98 [00:02<00:00, 45.93it/s, gen_loss=2.4] Training Epoch 151/501: 100%|██████████| 98/98 [00:02<00:00, 37.83it/s, disc_loss=0.476, gen_loss=1.37]
Training Epoch 152/501: 100%|██████████| 98/98 [00:02<00:00, 35.65it/s, disc_loss=0.394, gen_loss=1.64] Training Epoch 153/501: 100%|██████████| 98/98 [00:02<00:00, 40.39it/s, gen_loss=2.04] Training Epoch 154/501: 100%|██████████| 98/98 [00:02<00:00, 46.86it/s, disc_loss=0.414, gen_loss=1.88] Training Epoch 155/501: 100%|██████████| 98/98 [00:01<00:00, 51.07it/s, disc_loss=0.408, gen_loss=1.99] Training Epoch 156/501: 100%|██████████| 98/98 [00:02<00:00, 37.81it/s, disc_loss=0.417, gen_loss=1.68] Training Epoch 157/501: 100%|██████████| 98/98 [00:02<00:00, 33.20it/s, disc_loss=0.427, gen_loss=1.85] Training Epoch 158/501: 100%|██████████| 98/98 [00:02<00:00, 38.68it/s, disc_loss=0.394, gen_loss=2.06] Training Epoch 159/501: 100%|██████████| 98/98 [00:02<00:00, 44.30it/s, gen_loss=0.145] Training Epoch 160/501: 100%|██████████| 98/98 [00:01<00:00, 57.12it/s, disc_loss=0.541, gen_loss=1.91] Training Epoch 161/501: 100%|██████████| 98/98 [00:02<00:00, 35.28it/s, gen_loss=0.439] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 49.67783737182617, KID: 0.03920887038111687
Training Epoch 162/501: 100%|██████████| 98/98 [00:02<00:00, 36.08it/s, disc_loss=0.41, gen_loss=1.93] Training Epoch 163/501: 100%|██████████| 98/98 [00:02<00:00, 37.15it/s, disc_loss=0.451, gen_loss=1.79] Training Epoch 164/501: 100%|██████████| 98/98 [00:02<00:00, 38.65it/s, disc_loss=0.394, gen_loss=2.01] Training Epoch 165/501: 100%|██████████| 98/98 [00:01<00:00, 59.58it/s, gen_loss=0.125] Training Epoch 166/501: 100%|██████████| 98/98 [00:02<00:00, 34.98it/s, disc_loss=0.454, gen_loss=2.34] Training Epoch 167/501: 100%|██████████| 98/98 [00:02<00:00, 38.84it/s, disc_loss=0.49, gen_loss=1.02] Training Epoch 168/501: 100%|██████████| 98/98 [00:02<00:00, 34.60it/s, gen_loss=4.27] Training Epoch 169/501: 100%|██████████| 98/98 [00:02<00:00, 41.14it/s, disc_loss=0.544, gen_loss=0.838] Training Epoch 170/501: 100%|██████████| 98/98 [00:01<00:00, 57.06it/s, gen_loss=0.375] Training Epoch 171/501: 100%|██████████| 98/98 [00:02<00:00, 45.33it/s, disc_loss=0.447, gen_loss=2.54] Training Epoch 172/501: 100%|██████████| 98/98 [00:02<00:00, 37.04it/s, disc_loss=0.387, gen_loss=1.82] Training Epoch 173/501: 100%|██████████| 98/98 [00:02<00:00, 36.38it/s, disc_loss=0.452, gen_loss=2.04] Training Epoch 174/501: 100%|██████████| 98/98 [00:02<00:00, 38.46it/s, gen_loss=0.928] Training Epoch 175/501: 100%|██████████| 98/98 [00:02<00:00, 41.92it/s, disc_loss=1.1, gen_loss=2.43] Training Epoch 176/501: 100%|██████████| 98/98 [00:01<00:00, 50.62it/s, disc_loss=0.43, gen_loss=1.77] Training Epoch 177/501: 100%|██████████| 98/98 [00:02<00:00, 36.71it/s, disc_loss=0.621, gen_loss=0.725] Training Epoch 178/501: 100%|██████████| 98/98 [00:02<00:00, 36.24it/s, disc_loss=0.421, gen_loss=1.57] Training Epoch 179/501: 100%|██████████| 98/98 [00:02<00:00, 41.52it/s, disc_loss=0.437, gen_loss=2.25] Training Epoch 180/501: 100%|██████████| 98/98 [00:02<00:00, 42.83it/s, gen_loss=0.349] Training Epoch 181/501: 100%|██████████| 98/98 [00:01<00:00, 67.90it/s, gen_loss=0.0691] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 64.3747787475586, KID: 0.056929439306259155
Training Epoch 182/501: 100%|██████████| 98/98 [00:02<00:00, 33.08it/s, disc_loss=0.404, gen_loss=2.31] Training Epoch 183/501: 100%|██████████| 98/98 [00:02<00:00, 45.72it/s, disc_loss=0.443, gen_loss=1.77] Training Epoch 184/501: 100%|██████████| 98/98 [00:02<00:00, 41.39it/s, gen_loss=1.41] Training Epoch 185/501: 100%|██████████| 98/98 [00:02<00:00, 42.42it/s, disc_loss=0.609, gen_loss=0.739] Training Epoch 186/501: 100%|██████████| 98/98 [00:01<00:00, 58.82it/s, gen_loss=0.168] Training Epoch 187/501: 100%|██████████| 98/98 [00:02<00:00, 43.28it/s, disc_loss=0.385, gen_loss=1.67] Training Epoch 188/501: 100%|██████████| 98/98 [00:02<00:00, 37.10it/s, gen_loss=1.2] Training Epoch 189/501: 100%|██████████| 98/98 [00:02<00:00, 36.15it/s, gen_loss=0.5] Training Epoch 190/501: 100%|██████████| 98/98 [00:02<00:00, 39.84it/s, gen_loss=0.922] Training Epoch 191/501: 100%|██████████| 98/98 [00:01<00:00, 59.65it/s, gen_loss=0.432] Training Epoch 192/501: 100%|██████████| 98/98 [00:02<00:00, 46.73it/s, disc_loss=0.419, gen_loss=1.44] Training Epoch 193/501: 100%|██████████| 98/98 [00:02<00:00, 35.98it/s, disc_loss=0.438, gen_loss=1.32] Training Epoch 194/501: 100%|██████████| 98/98 [00:03<00:00, 31.65it/s, disc_loss=0.399, gen_loss=2.16] Training Epoch 195/501: 100%|██████████| 98/98 [00:02<00:00, 38.91it/s, disc_loss=0.416, gen_loss=1.91] Training Epoch 196/501: 100%|██████████| 98/98 [00:02<00:00, 46.83it/s, gen_loss=0.12] Training Epoch 197/501: 100%|██████████| 98/98 [00:01<00:00, 54.36it/s, disc_loss=0.4, gen_loss=2.11] Training Epoch 198/501: 100%|██████████| 98/98 [00:02<00:00, 36.32it/s, gen_loss=0.272] Training Epoch 199/501: 100%|██████████| 98/98 [00:02<00:00, 36.36it/s, disc_loss=0.473, gen_loss=1.9] Training Epoch 200/501: 100%|██████████| 98/98 [00:02<00:00, 42.96it/s, disc_loss=0.394, gen_loss=2.11] Training Epoch 201/501: 100%|██████████| 98/98 [00:02<00:00, 41.17it/s, gen_loss=0.979] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 40.704071044921875, KID: 0.029724914580583572
Training Epoch 202/501: 100%|██████████| 98/98 [00:01<00:00, 61.53it/s, gen_loss=0.0722] Training Epoch 203/501: 100%|██████████| 98/98 [00:02<00:00, 36.07it/s, disc_loss=0.427, gen_loss=1.84] Training Epoch 204/501: 100%|██████████| 98/98 [00:02<00:00, 38.00it/s, disc_loss=0.461, gen_loss=1.43] Training Epoch 205/501: 100%|██████████| 98/98 [00:02<00:00, 37.41it/s, disc_loss=0.501, gen_loss=1.74] Training Epoch 206/501: 100%|██████████| 98/98 [00:02<00:00, 43.94it/s, gen_loss=0.585] Training Epoch 207/501: 100%|██████████| 98/98 [00:01<00:00, 53.53it/s, gen_loss=0.24] Training Epoch 208/501: 100%|██████████| 98/98 [00:02<00:00, 42.91it/s, disc_loss=0.409, gen_loss=2.32] Training Epoch 209/501: 100%|██████████| 98/98 [00:02<00:00, 39.59it/s, disc_loss=0.428, gen_loss=1.33] Training Epoch 210/501: 100%|██████████| 98/98 [00:03<00:00, 32.14it/s, gen_loss=0.595] Training Epoch 211/501: 100%|██████████| 98/98 [00:02<00:00, 41.20it/s, gen_loss=2.5] Training Epoch 212/501: 100%|██████████| 98/98 [00:01<00:00, 54.97it/s, disc_loss=0.689, gen_loss=0.69] Training Epoch 213/501: 100%|██████████| 98/98 [00:01<00:00, 51.13it/s, disc_loss=0.411, gen_loss=2.09] Training Epoch 214/501: 100%|██████████| 98/98 [00:02<00:00, 41.67it/s, disc_loss=0.551, gen_loss=2.21] Training Epoch 215/501: 100%|██████████| 98/98 [00:02<00:00, 35.60it/s, disc_loss=0.482, gen_loss=1.94] Training Epoch 216/501: 100%|██████████| 98/98 [00:02<00:00, 39.87it/s, disc_loss=0.399, gen_loss=1.75] Training Epoch 217/501: 100%|██████████| 98/98 [00:02<00:00, 42.66it/s, gen_loss=0.178] Training Epoch 218/501: 100%|██████████| 98/98 [00:01<00:00, 78.73it/s, disc_loss=2.17, gen_loss=3.07] Training Epoch 219/501: 100%|██████████| 98/98 [00:02<00:00, 36.55it/s, gen_loss=1.19] Training Epoch 220/501: 100%|██████████| 98/98 [00:02<00:00, 35.98it/s, disc_loss=0.412, gen_loss=2.2] Training Epoch 221/501: 100%|██████████| 98/98 [00:02<00:00, 39.19it/s, disc_loss=1.13, gen_loss=4.11] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 41.501319885253906, KID: 0.029091592878103256
Training Epoch 222/501: 100%|██████████| 98/98 [00:02<00:00, 43.48it/s, disc_loss=0.42, gen_loss=1.54] Training Epoch 223/501: 100%|██████████| 98/98 [00:01<00:00, 62.40it/s, gen_loss=0.203] Training Epoch 224/501: 100%|██████████| 98/98 [00:02<00:00, 39.65it/s, disc_loss=0.404, gen_loss=1.93] Training Epoch 225/501: 100%|██████████| 98/98 [00:02<00:00, 39.27it/s, disc_loss=0.389, gen_loss=1.8] Training Epoch 226/501: 100%|██████████| 98/98 [00:02<00:00, 36.51it/s, disc_loss=0.398, gen_loss=2.21] Training Epoch 227/501: 100%|██████████| 98/98 [00:02<00:00, 38.68it/s, gen_loss=0.94] Training Epoch 228/501: 100%|██████████| 98/98 [00:01<00:00, 54.54it/s, gen_loss=0.598] Training Epoch 229/501: 100%|██████████| 98/98 [00:02<00:00, 42.80it/s, disc_loss=0.41, gen_loss=2.25] Training Epoch 230/501: 100%|██████████| 98/98 [00:02<00:00, 37.45it/s, disc_loss=0.446, gen_loss=1.83] Training Epoch 231/501: 100%|██████████| 98/98 [00:02<00:00, 34.21it/s, disc_loss=0.41, gen_loss=1.67] Training Epoch 232/501: 100%|██████████| 98/98 [00:02<00:00, 40.16it/s, disc_loss=0.438, gen_loss=2.06] Training Epoch 233/501: 100%|██████████| 98/98 [00:01<00:00, 54.34it/s, gen_loss=0.0826] Training Epoch 234/501: 100%|██████████| 98/98 [00:01<00:00, 58.61it/s, disc_loss=0.449, gen_loss=2.64] Training Epoch 235/501: 100%|██████████| 98/98 [00:02<00:00, 37.66it/s, gen_loss=0.441] Training Epoch 236/501: 100%|██████████| 98/98 [00:02<00:00, 36.91it/s, disc_loss=0.389, gen_loss=2.14] Training Epoch 237/501: 100%|██████████| 98/98 [00:02<00:00, 39.32it/s, disc_loss=0.401, gen_loss=1.92] Training Epoch 238/501: 100%|██████████| 98/98 [00:02<00:00, 40.20it/s, gen_loss=0.932] Training Epoch 239/501: 100%|██████████| 98/98 [00:01<00:00, 66.79it/s, gen_loss=0.243] Training Epoch 240/501: 100%|██████████| 98/98 [00:02<00:00, 39.92it/s, disc_loss=0.444, gen_loss=1.74] Training Epoch 241/501: 100%|██████████| 98/98 [00:02<00:00, 38.28it/s, disc_loss=0.411, gen_loss=2.17] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 46.847599029541016, KID: 0.03470000997185707
Training Epoch 242/501: 100%|██████████| 98/98 [00:02<00:00, 42.76it/s, disc_loss=0.415, gen_loss=1.53] Training Epoch 243/501: 100%|██████████| 98/98 [00:02<00:00, 38.18it/s, gen_loss=1.68] Training Epoch 244/501: 100%|██████████| 98/98 [00:01<00:00, 60.85it/s, gen_loss=0.883] Training Epoch 245/501: 100%|██████████| 98/98 [00:01<00:00, 51.31it/s, disc_loss=0.392, gen_loss=2.02] Training Epoch 246/501: 100%|██████████| 98/98 [00:02<00:00, 38.91it/s, disc_loss=0.425, gen_loss=1.65] Training Epoch 247/501: 100%|██████████| 98/98 [00:03<00:00, 32.51it/s, gen_loss=0.556] Training Epoch 248/501: 100%|██████████| 98/98 [00:02<00:00, 39.71it/s, disc_loss=0.387, gen_loss=2.02] Training Epoch 249/501: 100%|██████████| 98/98 [00:01<00:00, 55.49it/s, disc_loss=0.412, gen_loss=1.59] Training Epoch 250/501: 100%|██████████| 98/98 [00:01<00:00, 55.95it/s, disc_loss=0.41, gen_loss=1.88] Training Epoch 251/501: 100%|██████████| 98/98 [00:02<00:00, 37.74it/s, disc_loss=0.46, gen_loss=2.19]
Training Epoch 252/501: 100%|██████████| 98/98 [00:02<00:00, 32.97it/s, disc_loss=0.407, gen_loss=1.58] Training Epoch 253/501: 100%|██████████| 98/98 [00:02<00:00, 44.16it/s, disc_loss=0.414, gen_loss=1.48] Training Epoch 254/501: 100%|██████████| 98/98 [00:02<00:00, 45.36it/s, gen_loss=0.34] Training Epoch 255/501: 100%|██████████| 98/98 [00:01<00:00, 59.29it/s, gen_loss=0.085] Training Epoch 256/501: 100%|██████████| 98/98 [00:02<00:00, 33.60it/s, gen_loss=1.49] Training Epoch 257/501: 100%|██████████| 98/98 [00:02<00:00, 35.52it/s, disc_loss=0.415, gen_loss=1.92] Training Epoch 258/501: 100%|██████████| 98/98 [00:02<00:00, 37.23it/s, gen_loss=1.61] Training Epoch 259/501: 100%|██████████| 98/98 [00:02<00:00, 37.51it/s, gen_loss=2.59] Training Epoch 260/501: 100%|██████████| 98/98 [00:01<00:00, 62.50it/s, gen_loss=0.355] Training Epoch 261/501: 100%|██████████| 98/98 [00:02<00:00, 43.27it/s, disc_loss=0.379, gen_loss=2.16] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 63.85567855834961, KID: 0.04690435156226158
Training Epoch 262/501: 100%|██████████| 98/98 [00:02<00:00, 36.72it/s, disc_loss=0.462, gen_loss=1.9] Training Epoch 263/501: 100%|██████████| 98/98 [00:03<00:00, 32.52it/s, disc_loss=0.423, gen_loss=2.38] Training Epoch 264/501: 100%|██████████| 98/98 [00:02<00:00, 38.28it/s, disc_loss=0.508, gen_loss=1.11] Training Epoch 265/501: 100%|██████████| 98/98 [00:01<00:00, 60.82it/s, disc_loss=0.43, gen_loss=1.87] Training Epoch 266/501: 100%|██████████| 98/98 [00:01<00:00, 55.11it/s, disc_loss=0.429, gen_loss=1.65] Training Epoch 267/501: 100%|██████████| 98/98 [00:02<00:00, 36.76it/s, disc_loss=0.486, gen_loss=2.46] Training Epoch 268/501: 100%|██████████| 98/98 [00:02<00:00, 35.49it/s, gen_loss=1] Training Epoch 269/501: 100%|██████████| 98/98 [00:02<00:00, 39.10it/s, disc_loss=0.411, gen_loss=1.91] Training Epoch 270/501: 100%|██████████| 98/98 [00:01<00:00, 53.07it/s, gen_loss=0.313] Training Epoch 271/501: 100%|██████████| 98/98 [00:01<00:00, 61.82it/s, gen_loss=0.0908] Training Epoch 272/501: 100%|██████████| 98/98 [00:02<00:00, 35.26it/s, gen_loss=0.294] Training Epoch 273/501: 100%|██████████| 98/98 [00:02<00:00, 35.86it/s, disc_loss=0.46, gen_loss=2.16] Training Epoch 274/501: 100%|██████████| 98/98 [00:02<00:00, 39.92it/s, disc_loss=0.453, gen_loss=1.83] Training Epoch 275/501: 100%|██████████| 98/98 [00:02<00:00, 43.08it/s, gen_loss=0.922] Training Epoch 276/501: 100%|██████████| 98/98 [00:01<00:00, 56.03it/s, gen_loss=0.142] Training Epoch 277/501: 100%|██████████| 98/98 [00:02<00:00, 36.72it/s, disc_loss=0.414, gen_loss=1.83] Training Epoch 278/501: 100%|██████████| 98/98 [00:02<00:00, 40.17it/s, disc_loss=0.518, gen_loss=1.51] Training Epoch 279/501: 100%|██████████| 98/98 [00:02<00:00, 41.00it/s, gen_loss=0.392] Training Epoch 280/501: 100%|██████████| 98/98 [00:02<00:00, 37.07it/s, disc_loss=0.432, gen_loss=1.46] Training Epoch 281/501: 100%|██████████| 98/98 [00:01<00:00, 49.02it/s, gen_loss=0.516] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 66.27986145019531, KID: 0.05117489770054817
Training Epoch 282/501: 100%|██████████| 98/98 [00:01<00:00, 49.91it/s, disc_loss=0.403, gen_loss=2.01] Training Epoch 283/501: 100%|██████████| 98/98 [00:02<00:00, 43.40it/s, disc_loss=0.431, gen_loss=1.89] Training Epoch 284/501: 100%|██████████| 98/98 [00:02<00:00, 35.54it/s, disc_loss=0.421, gen_loss=2.01] Training Epoch 285/501: 100%|██████████| 98/98 [00:02<00:00, 37.94it/s, gen_loss=0.785] Training Epoch 286/501: 100%|██████████| 98/98 [00:01<00:00, 55.50it/s, disc_loss=0.545, gen_loss=1.49] Training Epoch 287/501: 100%|██████████| 98/98 [00:01<00:00, 50.33it/s, disc_loss=0.391, gen_loss=1.89] Training Epoch 288/501: 100%|██████████| 98/98 [00:02<00:00, 38.27it/s, disc_loss=0.435, gen_loss=1.82] Training Epoch 289/501: 100%|██████████| 98/98 [00:02<00:00, 34.98it/s, disc_loss=0.445, gen_loss=2.06] Training Epoch 290/501: 100%|██████████| 98/98 [00:02<00:00, 34.92it/s, disc_loss=0.404, gen_loss=1.83] Training Epoch 291/501: 100%|██████████| 98/98 [00:02<00:00, 48.00it/s, gen_loss=0.28] Training Epoch 292/501: 100%|██████████| 98/98 [00:01<00:00, 57.55it/s, gen_loss=0.0936] Training Epoch 293/501: 100%|██████████| 98/98 [00:02<00:00, 39.02it/s, gen_loss=0.85] Training Epoch 294/501: 100%|██████████| 98/98 [00:02<00:00, 35.98it/s, disc_loss=0.449, gen_loss=1.51] Training Epoch 295/501: 100%|██████████| 98/98 [00:02<00:00, 40.32it/s, disc_loss=0.441, gen_loss=1.92] Training Epoch 296/501: 100%|██████████| 98/98 [00:02<00:00, 38.34it/s, gen_loss=0.652] Training Epoch 297/501: 100%|██████████| 98/98 [00:01<00:00, 58.95it/s, gen_loss=0.266] Training Epoch 298/501: 100%|██████████| 98/98 [00:02<00:00, 43.59it/s, disc_loss=0.431, gen_loss=2.16] Training Epoch 299/501: 100%|██████████| 98/98 [00:02<00:00, 35.63it/s, disc_loss=0.396, gen_loss=2.09] Training Epoch 300/501: 100%|██████████| 98/98 [00:02<00:00, 36.98it/s, disc_loss=0.442, gen_loss=2.27] Training Epoch 301/501: 100%|██████████| 98/98 [00:02<00:00, 40.13it/s, disc_loss=0.424, gen_loss=1.74] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 43.841026306152344, KID: 0.029743852093815804
Training Epoch 302/501: 100%|██████████| 98/98 [00:01<00:00, 54.27it/s, disc_loss=0.513, gen_loss=2.83] Training Epoch 303/501: 100%|██████████| 98/98 [00:01<00:00, 62.79it/s, disc_loss=0.421, gen_loss=2.01] Training Epoch 304/501: 100%|██████████| 98/98 [00:02<00:00, 38.58it/s, disc_loss=0.426, gen_loss=1.53] Training Epoch 305/501: 100%|██████████| 98/98 [00:02<00:00, 33.66it/s, gen_loss=0.349] Training Epoch 306/501: 100%|██████████| 98/98 [00:02<00:00, 35.53it/s, disc_loss=0.422, gen_loss=1.75] Training Epoch 307/501: 100%|██████████| 98/98 [00:01<00:00, 50.85it/s, gen_loss=0.107] Training Epoch 308/501: 100%|██████████| 98/98 [00:01<00:00, 55.69it/s, gen_loss=0.284] Training Epoch 309/501: 100%|██████████| 98/98 [00:02<00:00, 39.72it/s, gen_loss=0.27] Training Epoch 310/501: 100%|██████████| 98/98 [00:02<00:00, 33.96it/s, disc_loss=0.41, gen_loss=1.79] Training Epoch 311/501: 100%|██████████| 98/98 [00:02<00:00, 43.73it/s, disc_loss=0.452, gen_loss=1.58] Training Epoch 312/501: 100%|██████████| 98/98 [00:02<00:00, 40.21it/s, gen_loss=0.803] Training Epoch 313/501: 100%|██████████| 98/98 [00:01<00:00, 57.41it/s, disc_loss=0.604, gen_loss=1.99] Training Epoch 314/501: 100%|██████████| 98/98 [00:02<00:00, 42.49it/s, disc_loss=0.415, gen_loss=1.86] Training Epoch 315/501: 100%|██████████| 98/98 [00:02<00:00, 35.57it/s, disc_loss=0.42, gen_loss=2.39] Training Epoch 316/501: 100%|██████████| 98/98 [00:02<00:00, 40.86it/s, disc_loss=0.522, gen_loss=1.95] Training Epoch 317/501: 100%|██████████| 98/98 [00:02<00:00, 34.34it/s, disc_loss=0.417, gen_loss=2.02] Training Epoch 318/501: 100%|██████████| 98/98 [00:01<00:00, 56.80it/s, gen_loss=0.614] Training Epoch 319/501: 100%|██████████| 98/98 [00:02<00:00, 48.75it/s, disc_loss=0.412, gen_loss=1.98] Training Epoch 320/501: 100%|██████████| 98/98 [00:02<00:00, 36.15it/s, disc_loss=0.386, gen_loss=2.46] Training Epoch 321/501: 100%|██████████| 98/98 [00:02<00:00, 33.44it/s, disc_loss=0.382, gen_loss=1.9] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 52.3414306640625, KID: 0.03563694283366203
Training Epoch 322/501: 100%|██████████| 98/98 [00:02<00:00, 38.65it/s, gen_loss=0.394] Training Epoch 323/501: 100%|██████████| 98/98 [00:02<00:00, 47.43it/s, gen_loss=0.107] Training Epoch 324/501: 100%|██████████| 98/98 [00:01<00:00, 50.04it/s, gen_loss=0.199] Training Epoch 325/501: 100%|██████████| 98/98 [00:02<00:00, 38.25it/s, disc_loss=0.398, gen_loss=2.01] Training Epoch 326/501: 100%|██████████| 98/98 [00:02<00:00, 33.84it/s, gen_loss=1.5] Training Epoch 327/501: 100%|██████████| 98/98 [00:02<00:00, 38.14it/s, disc_loss=0.409, gen_loss=2.34] Training Epoch 328/501: 100%|██████████| 98/98 [00:02<00:00, 47.25it/s, gen_loss=0.197] Training Epoch 329/501: 100%|██████████| 98/98 [00:01<00:00, 54.53it/s, gen_loss=0.563] Training Epoch 330/501: 100%|██████████| 98/98 [00:02<00:00, 39.77it/s, gen_loss=0.597] Training Epoch 331/501: 100%|██████████| 98/98 [00:02<00:00, 35.84it/s, disc_loss=0.399, gen_loss=1.63] Training Epoch 332/501: 100%|██████████| 98/98 [00:02<00:00, 36.08it/s, disc_loss=0.427, gen_loss=1.9] Training Epoch 333/501: 100%|██████████| 98/98 [00:02<00:00, 35.58it/s, gen_loss=0.336] Training Epoch 334/501: 100%|██████████| 98/98 [00:01<00:00, 52.18it/s, gen_loss=0.311] Training Epoch 335/501: 100%|██████████| 98/98 [00:01<00:00, 49.05it/s, disc_loss=0.388, gen_loss=1.89] Training Epoch 336/501: 100%|██████████| 98/98 [00:02<00:00, 36.33it/s, disc_loss=0.381, gen_loss=1.9] Training Epoch 337/501: 100%|██████████| 98/98 [00:02<00:00, 34.44it/s, gen_loss=1.54] Training Epoch 338/501: 100%|██████████| 98/98 [00:02<00:00, 37.60it/s, disc_loss=0.422, gen_loss=1.82] Training Epoch 339/501: 100%|██████████| 98/98 [00:02<00:00, 47.01it/s, disc_loss=0.435, gen_loss=1.84] Training Epoch 340/501: 100%|██████████| 98/98 [00:01<00:00, 52.50it/s, disc_loss=0.478, gen_loss=1.82] Training Epoch 341/501: 100%|██████████| 98/98 [00:02<00:00, 35.33it/s, disc_loss=0.41, gen_loss=1.96] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 53.50605773925781, KID: 0.03410603106021881
Training Epoch 342/501: 100%|██████████| 98/98 [00:02<00:00, 35.08it/s, disc_loss=0.438, gen_loss=2.13] Training Epoch 343/501: 100%|██████████| 98/98 [00:02<00:00, 36.59it/s, disc_loss=0.407, gen_loss=1.79] Training Epoch 344/501: 100%|██████████| 98/98 [00:02<00:00, 48.83it/s, gen_loss=0.11] Training Epoch 345/501: 100%|██████████| 98/98 [00:01<00:00, 52.11it/s, gen_loss=0.32] Training Epoch 346/501: 100%|██████████| 98/98 [00:02<00:00, 42.72it/s, gen_loss=0.208] Training Epoch 347/501: 100%|██████████| 98/98 [00:02<00:00, 35.02it/s, disc_loss=0.404, gen_loss=2.14] Training Epoch 348/501: 100%|██████████| 98/98 [00:02<00:00, 38.00it/s, disc_loss=0.443, gen_loss=1.82] Training Epoch 349/501: 100%|██████████| 98/98 [00:02<00:00, 39.08it/s, gen_loss=1.16] Training Epoch 350/501: 100%|██████████| 98/98 [00:02<00:00, 47.26it/s, gen_loss=0.144] Training Epoch 351/501: 100%|██████████| 98/98 [00:02<00:00, 40.31it/s, disc_loss=0.405, gen_loss=2.15]
Training Epoch 352/501: 100%|██████████| 98/98 [00:02<00:00, 38.75it/s, disc_loss=0.393, gen_loss=1.93] Training Epoch 353/501: 100%|██████████| 98/98 [00:02<00:00, 38.90it/s, gen_loss=0.37] Training Epoch 354/501: 100%|██████████| 98/98 [00:02<00:00, 34.08it/s, gen_loss=1.36] Training Epoch 355/501: 100%|██████████| 98/98 [00:02<00:00, 48.23it/s, gen_loss=1.27] Training Epoch 356/501: 100%|██████████| 98/98 [00:02<00:00, 44.61it/s, disc_loss=0.413, gen_loss=1.77] Training Epoch 357/501: 100%|██████████| 98/98 [00:02<00:00, 40.11it/s, disc_loss=0.408, gen_loss=1.87] Training Epoch 358/501: 100%|██████████| 98/98 [00:02<00:00, 34.49it/s, disc_loss=0.392, gen_loss=1.84] Training Epoch 359/501: 100%|██████████| 98/98 [00:02<00:00, 38.85it/s, gen_loss=0.63] Training Epoch 360/501: 100%|██████████| 98/98 [00:02<00:00, 46.78it/s, gen_loss=0.0933] Training Epoch 361/501: 100%|██████████| 98/98 [00:01<00:00, 51.03it/s, gen_loss=0.194] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 63.84241485595703, KID: 0.04619067534804344
Training Epoch 362/501: 100%|██████████| 98/98 [00:02<00:00, 37.03it/s, disc_loss=0.439, gen_loss=2.34] Training Epoch 363/501: 100%|██████████| 98/98 [00:03<00:00, 32.07it/s, gen_loss=1.48] Training Epoch 364/501: 100%|██████████| 98/98 [00:02<00:00, 36.76it/s, disc_loss=0.417, gen_loss=2.2] Training Epoch 365/501: 100%|██████████| 98/98 [00:02<00:00, 43.02it/s, gen_loss=0.274] Training Epoch 366/501: 100%|██████████| 98/98 [00:02<00:00, 44.82it/s, disc_loss=0.393, gen_loss=1.99] Training Epoch 367/501: 100%|██████████| 98/98 [00:02<00:00, 41.36it/s, gen_loss=1.02] Training Epoch 368/501: 100%|██████████| 98/98 [00:02<00:00, 36.57it/s, disc_loss=0.423, gen_loss=1.86] Training Epoch 369/501: 100%|██████████| 98/98 [00:02<00:00, 41.42it/s, disc_loss=0.393, gen_loss=2.17] Training Epoch 370/501: 100%|██████████| 98/98 [00:02<00:00, 37.81it/s, gen_loss=0.341] Training Epoch 371/501: 100%|██████████| 98/98 [00:01<00:00, 54.24it/s, gen_loss=0.322] Training Epoch 372/501: 100%|██████████| 98/98 [00:02<00:00, 44.05it/s, disc_loss=0.403, gen_loss=1.82] Training Epoch 373/501: 100%|██████████| 98/98 [00:02<00:00, 41.21it/s, disc_loss=0.394, gen_loss=1.82] Training Epoch 374/501: 100%|██████████| 98/98 [00:02<00:00, 36.31it/s, gen_loss=0.881] Training Epoch 375/501: 100%|██████████| 98/98 [00:02<00:00, 34.34it/s, disc_loss=0.412, gen_loss=1.54] Training Epoch 376/501: 100%|██████████| 98/98 [00:01<00:00, 52.63it/s, disc_loss=0.416, gen_loss=1.76] Training Epoch 377/501: 100%|██████████| 98/98 [00:01<00:00, 50.35it/s, disc_loss=0.417, gen_loss=1.92] Training Epoch 378/501: 100%|██████████| 98/98 [00:02<00:00, 45.70it/s, disc_loss=0.378, gen_loss=1.92] Training Epoch 379/501: 100%|██████████| 98/98 [00:02<00:00, 39.07it/s, disc_loss=0.397, gen_loss=1.92] Training Epoch 380/501: 100%|██████████| 98/98 [00:02<00:00, 38.10it/s, disc_loss=0.381, gen_loss=2.11] Training Epoch 381/501: 100%|██████████| 98/98 [00:01<00:00, 52.69it/s, gen_loss=0.176] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 64.13177490234375, KID: 0.04619920626282692
Training Epoch 382/501: 100%|██████████| 98/98 [00:01<00:00, 50.85it/s, gen_loss=0.297] Training Epoch 383/501: 100%|██████████| 98/98 [00:02<00:00, 40.33it/s, gen_loss=0.568] Training Epoch 384/501: 100%|██████████| 98/98 [00:02<00:00, 36.69it/s, disc_loss=0.403, gen_loss=1.78] Training Epoch 385/501: 100%|██████████| 98/98 [00:02<00:00, 39.60it/s, disc_loss=0.413, gen_loss=2.29] Training Epoch 386/501: 100%|██████████| 98/98 [00:02<00:00, 42.86it/s, gen_loss=0.627] Training Epoch 387/501: 100%|██████████| 98/98 [00:01<00:00, 56.17it/s, gen_loss=0.116] Training Epoch 388/501: 100%|██████████| 98/98 [00:02<00:00, 41.59it/s, disc_loss=0.38, gen_loss=2.13] Training Epoch 389/501: 100%|██████████| 98/98 [00:02<00:00, 39.12it/s, disc_loss=0.393, gen_loss=1.53] Training Epoch 390/501: 100%|██████████| 98/98 [00:02<00:00, 38.89it/s, disc_loss=0.438, gen_loss=1.75] Training Epoch 391/501: 100%|██████████| 98/98 [00:02<00:00, 35.22it/s, gen_loss=1.41] Training Epoch 392/501: 100%|██████████| 98/98 [00:01<00:00, 50.38it/s, gen_loss=1.08] Training Epoch 393/501: 100%|██████████| 98/98 [00:01<00:00, 49.98it/s, disc_loss=0.366, gen_loss=2.03] Training Epoch 394/501: 100%|██████████| 98/98 [00:02<00:00, 35.48it/s, disc_loss=0.416, gen_loss=1.78] Training Epoch 395/501: 100%|██████████| 98/98 [00:02<00:00, 35.40it/s, gen_loss=1.68] Training Epoch 396/501: 100%|██████████| 98/98 [00:02<00:00, 44.15it/s, gen_loss=0.556] Training Epoch 397/501: 100%|██████████| 98/98 [00:01<00:00, 49.47it/s, gen_loss=0.114] Training Epoch 398/501: 100%|██████████| 98/98 [00:01<00:00, 53.29it/s, gen_loss=0.489] Training Epoch 399/501: 100%|██████████| 98/98 [00:02<00:00, 47.48it/s, gen_loss=0.271] Training Epoch 400/501: 100%|██████████| 98/98 [00:02<00:00, 35.96it/s, disc_loss=0.372, gen_loss=2.03] Training Epoch 401/501: 100%|██████████| 98/98 [00:02<00:00, 37.52it/s, disc_loss=0.389, gen_loss=2.01] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 52.477420806884766, KID: 0.033259470015764236
Training Epoch 402/501: 100%|██████████| 98/98 [00:01<00:00, 49.63it/s, gen_loss=0.254] Training Epoch 403/501: 100%|██████████| 98/98 [00:01<00:00, 51.26it/s, gen_loss=1.51] Training Epoch 404/501: 100%|██████████| 98/98 [00:02<00:00, 40.72it/s, disc_loss=0.405, gen_loss=2.14] Training Epoch 405/501: 100%|██████████| 98/98 [00:02<00:00, 40.92it/s, disc_loss=0.414, gen_loss=2.09] Training Epoch 406/501: 100%|██████████| 98/98 [00:02<00:00, 37.90it/s, disc_loss=0.387, gen_loss=2.17] Training Epoch 407/501: 100%|██████████| 98/98 [00:02<00:00, 36.47it/s, disc_loss=0.414, gen_loss=2.06] Training Epoch 408/501: 100%|██████████| 98/98 [00:01<00:00, 58.27it/s, gen_loss=0.45] Training Epoch 409/501: 100%|██████████| 98/98 [00:01<00:00, 51.31it/s, disc_loss=0.411, gen_loss=1.86] Training Epoch 410/501: 100%|██████████| 98/98 [00:02<00:00, 34.44it/s, disc_loss=0.385, gen_loss=1.73] Training Epoch 411/501: 100%|██████████| 98/98 [00:02<00:00, 34.66it/s, gen_loss=0.482] Training Epoch 412/501: 100%|██████████| 98/98 [00:02<00:00, 34.13it/s, disc_loss=0.397, gen_loss=2.22] Training Epoch 413/501: 100%|██████████| 98/98 [00:02<00:00, 48.68it/s, disc_loss=0.424, gen_loss=1.9] Training Epoch 414/501: 100%|██████████| 98/98 [00:01<00:00, 49.07it/s, disc_loss=0.413, gen_loss=1.59] Training Epoch 415/501: 100%|██████████| 98/98 [00:02<00:00, 40.16it/s, disc_loss=0.4, gen_loss=2.38] Training Epoch 416/501: 100%|██████████| 98/98 [00:03<00:00, 32.26it/s, disc_loss=0.403, gen_loss=1.89] Training Epoch 417/501: 100%|██████████| 98/98 [00:02<00:00, 34.75it/s, disc_loss=0.396, gen_loss=2.02] Training Epoch 418/501: 100%|██████████| 98/98 [00:02<00:00, 48.25it/s, gen_loss=0.185] Training Epoch 419/501: 100%|██████████| 98/98 [00:01<00:00, 56.97it/s, gen_loss=0.338] Training Epoch 420/501: 100%|██████████| 98/98 [00:02<00:00, 38.65it/s, gen_loss=0.512] Training Epoch 421/501: 100%|██████████| 98/98 [00:02<00:00, 37.90it/s, disc_loss=0.423, gen_loss=1.41] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 46.83354568481445, KID: 0.028255116194486618
Training Epoch 422/501: 100%|██████████| 98/98 [00:02<00:00, 39.98it/s, disc_loss=0.389, gen_loss=2.06] Training Epoch 423/501: 100%|██████████| 98/98 [00:02<00:00, 43.10it/s, gen_loss=0.983] Training Epoch 424/501: 100%|██████████| 98/98 [00:01<00:00, 61.33it/s, gen_loss=0.115] Training Epoch 425/501: 100%|██████████| 98/98 [00:02<00:00, 46.78it/s, disc_loss=0.439, gen_loss=1.15] Training Epoch 426/501: 100%|██████████| 98/98 [00:02<00:00, 39.85it/s, disc_loss=0.384, gen_loss=1.88] Training Epoch 427/501: 100%|██████████| 98/98 [00:02<00:00, 39.29it/s, disc_loss=0.589, gen_loss=2.14] Training Epoch 428/501: 100%|██████████| 98/98 [00:02<00:00, 38.34it/s, gen_loss=0.585] Training Epoch 429/501: 100%|██████████| 98/98 [00:01<00:00, 49.87it/s, gen_loss=1.04] Training Epoch 430/501: 100%|██████████| 98/98 [00:01<00:00, 76.56it/s, disc_loss=0.504, gen_loss=1.79] Training Epoch 431/501: 100%|██████████| 98/98 [00:02<00:00, 40.66it/s, disc_loss=0.37, gen_loss=2.27] Training Epoch 432/501: 100%|██████████| 98/98 [00:02<00:00, 40.56it/s, disc_loss=0.384, gen_loss=1.96] Training Epoch 433/501: 100%|██████████| 98/98 [00:02<00:00, 46.66it/s, gen_loss=0.921] Training Epoch 434/501: 100%|██████████| 98/98 [00:01<00:00, 53.46it/s, gen_loss=0.147] Training Epoch 435/501: 100%|██████████| 98/98 [00:01<00:00, 52.63it/s, gen_loss=0.221] Training Epoch 436/501: 100%|██████████| 98/98 [00:02<00:00, 43.74it/s, gen_loss=0.182] Training Epoch 437/501: 100%|██████████| 98/98 [00:02<00:00, 34.45it/s, disc_loss=0.412, gen_loss=1.91] Training Epoch 438/501: 100%|██████████| 98/98 [00:02<00:00, 34.60it/s, disc_loss=0.393, gen_loss=2.18] Training Epoch 439/501: 100%|██████████| 98/98 [00:02<00:00, 47.56it/s, gen_loss=0.463] Training Epoch 440/501: 100%|██████████| 98/98 [00:01<00:00, 53.63it/s, disc_loss=0.407, gen_loss=1.94] Training Epoch 441/501: 100%|██████████| 98/98 [00:02<00:00, 43.36it/s, gen_loss=1.67] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 55.86204147338867, KID: 0.03456706553697586
Training Epoch 442/501: 100%|██████████| 98/98 [00:02<00:00, 37.58it/s, disc_loss=0.388, gen_loss=2.2] Training Epoch 443/501: 100%|██████████| 98/98 [00:02<00:00, 36.00it/s, disc_loss=0.391, gen_loss=2.09] Training Epoch 444/501: 100%|██████████| 98/98 [00:02<00:00, 37.34it/s, gen_loss=2.44] Training Epoch 445/501: 100%|██████████| 98/98 [00:02<00:00, 47.44it/s, gen_loss=0.382] Training Epoch 446/501: 100%|██████████| 98/98 [00:02<00:00, 48.57it/s, disc_loss=0.409, gen_loss=2.1] Training Epoch 447/501: 100%|██████████| 98/98 [00:02<00:00, 36.77it/s, disc_loss=0.386, gen_loss=1.98] Training Epoch 448/501: 100%|██████████| 98/98 [00:02<00:00, 37.48it/s, gen_loss=0.81] Training Epoch 449/501: 100%|██████████| 98/98 [00:02<00:00, 37.83it/s, disc_loss=0.421, gen_loss=1.82] Training Epoch 450/501: 100%|██████████| 98/98 [00:01<00:00, 49.63it/s, disc_loss=0.442, gen_loss=1.73] Training Epoch 451/501: 100%|██████████| 98/98 [00:01<00:00, 49.34it/s, disc_loss=0.431, gen_loss=1.61]
Training Epoch 452/501: 100%|██████████| 98/98 [00:02<00:00, 40.40it/s, disc_loss=0.376, gen_loss=1.88] Training Epoch 453/501: 100%|██████████| 98/98 [00:03<00:00, 32.42it/s, disc_loss=0.388, gen_loss=2.32] Training Epoch 454/501: 100%|██████████| 98/98 [00:02<00:00, 33.55it/s, disc_loss=0.384, gen_loss=1.87] Training Epoch 455/501: 100%|██████████| 98/98 [00:02<00:00, 46.81it/s, gen_loss=0.156] Training Epoch 456/501: 100%|██████████| 98/98 [00:01<00:00, 51.14it/s, gen_loss=0.769] Training Epoch 457/501: 100%|██████████| 98/98 [00:02<00:00, 41.88it/s, gen_loss=0.291] Training Epoch 458/501: 100%|██████████| 98/98 [00:02<00:00, 34.69it/s, disc_loss=0.446, gen_loss=1.77] Training Epoch 459/501: 100%|██████████| 98/98 [00:02<00:00, 38.71it/s, disc_loss=0.377, gen_loss=1.87] Training Epoch 460/501: 100%|██████████| 98/98 [00:02<00:00, 41.17it/s, gen_loss=1.39] Training Epoch 461/501: 100%|██████████| 98/98 [00:01<00:00, 55.05it/s, gen_loss=0.217] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 50.99947738647461, KID: 0.02537878043949604
Training Epoch 462/501: 100%|██████████| 98/98 [00:02<00:00, 40.05it/s, disc_loss=0.431, gen_loss=1.71] Training Epoch 463/501: 100%|██████████| 98/98 [00:02<00:00, 35.44it/s, disc_loss=0.385, gen_loss=2.22] Training Epoch 464/501: 100%|██████████| 98/98 [00:02<00:00, 34.47it/s, gen_loss=0.351] Training Epoch 465/501: 100%|██████████| 98/98 [00:02<00:00, 34.94it/s, gen_loss=0.202] Training Epoch 466/501: 100%|██████████| 98/98 [00:02<00:00, 42.82it/s, disc_loss=0.393, gen_loss=1.73] Training Epoch 467/501: 100%|██████████| 98/98 [00:02<00:00, 48.69it/s, disc_loss=0.389, gen_loss=2.01] Training Epoch 468/501: 100%|██████████| 98/98 [00:02<00:00, 37.51it/s, disc_loss=0.392, gen_loss=2.17] Training Epoch 469/501: 100%|██████████| 98/98 [00:03<00:00, 31.25it/s, disc_loss=0.362, gen_loss=2.34] Training Epoch 470/501: 100%|██████████| 98/98 [00:02<00:00, 36.56it/s, disc_loss=0.409, gen_loss=1.82] Training Epoch 471/501: 100%|██████████| 98/98 [00:01<00:00, 51.51it/s, gen_loss=0.152] Training Epoch 472/501: 100%|██████████| 98/98 [00:02<00:00, 40.53it/s, gen_loss=0.356] Training Epoch 473/501: 100%|██████████| 98/98 [00:02<00:00, 39.67it/s, gen_loss=0.17] Training Epoch 474/501: 100%|██████████| 98/98 [00:02<00:00, 34.71it/s, disc_loss=0.415, gen_loss=1.43] Training Epoch 475/501: 100%|██████████| 98/98 [00:02<00:00, 34.62it/s, disc_loss=0.39, gen_loss=2.08] Training Epoch 476/501: 100%|██████████| 98/98 [00:02<00:00, 40.01it/s, gen_loss=0.347] Training Epoch 477/501: 100%|██████████| 98/98 [00:01<00:00, 61.26it/s, disc_loss=0.909, gen_loss=2.07] Training Epoch 478/501: 100%|██████████| 98/98 [00:02<00:00, 37.34it/s, disc_loss=0.395, gen_loss=1.86] Training Epoch 479/501: 100%|██████████| 98/98 [00:02<00:00, 40.32it/s, disc_loss=0.415, gen_loss=1.84] Training Epoch 480/501: 100%|██████████| 98/98 [00:02<00:00, 47.26it/s, disc_loss=0.449, gen_loss=2] Training Epoch 481/501: 100%|██████████| 98/98 [00:02<00:00, 36.25it/s, disc_loss=0.387, gen_loss=2.42] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 55.32238006591797, KID: 0.03381608426570892
Training Epoch 482/501: 100%|██████████| 98/98 [00:01<00:00, 66.63it/s, gen_loss=0.606] Training Epoch 483/501: 100%|██████████| 98/98 [00:01<00:00, 64.24it/s, disc_loss=0.387, gen_loss=1.99] Training Epoch 484/501: 100%|██████████| 98/98 [00:02<00:00, 43.85it/s, disc_loss=0.398, gen_loss=2.02] Training Epoch 485/501: 100%|██████████| 98/98 [00:02<00:00, 33.43it/s, gen_loss=1.67] Training Epoch 486/501: 100%|██████████| 98/98 [00:02<00:00, 41.74it/s, gen_loss=2.13] Training Epoch 487/501: 100%|██████████| 98/98 [00:01<00:00, 54.73it/s, disc_loss=0.697, gen_loss=0.767] Training Epoch 488/501: 100%|██████████| 98/98 [00:01<00:00, 52.00it/s, gen_loss=0.229] Training Epoch 489/501: 100%|██████████| 98/98 [00:02<00:00, 36.94it/s, disc_loss=0.391, gen_loss=1.95] Training Epoch 490/501: 100%|██████████| 98/98 [00:02<00:00, 35.47it/s, disc_loss=0.426, gen_loss=1.34] Training Epoch 491/501: 100%|██████████| 98/98 [00:02<00:00, 39.74it/s, disc_loss=0.39, gen_loss=1.9] Training Epoch 492/501: 100%|██████████| 98/98 [00:02<00:00, 48.27it/s, gen_loss=0.179] Training Epoch 493/501: 100%|██████████| 98/98 [00:02<00:00, 48.33it/s, disc_loss=0.392, gen_loss=2.2] Training Epoch 494/501: 100%|██████████| 98/98 [00:02<00:00, 47.11it/s, gen_loss=0.675] Training Epoch 495/501: 100%|██████████| 98/98 [00:02<00:00, 41.79it/s, disc_loss=0.365, gen_loss=2.23] Training Epoch 496/501: 100%|██████████| 98/98 [00:02<00:00, 39.96it/s, disc_loss=0.37, gen_loss=2.34] Training Epoch 497/501: 100%|██████████| 98/98 [00:02<00:00, 39.00it/s, disc_loss=0.382, gen_loss=2.04] Training Epoch 498/501: 100%|██████████| 98/98 [00:01<00:00, 61.12it/s, gen_loss=0.37] Training Epoch 499/501: 100%|██████████| 98/98 [00:02<00:00, 39.41it/s, disc_loss=0.373, gen_loss=2.04] Training Epoch 500/501: 100%|██████████| 98/98 [00:02<00:00, 40.14it/s, disc_loss=0.395, gen_loss=1.99] Training Epoch 501/501: 100%|██████████| 98/98 [00:02<00:00, 34.30it/s, gen_loss=0.988] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 57.4306640625, KID: 0.03858458250761032
<Figure size 1200x600 with 0 Axes>
Observations¶
- Compared to previous models this one is definitely more stable and the FID and KID consistently decreased for the first 200 epochs which is a good sign.
- While the FID decreases constantly to around 40, it doesn begin to deteriorate after 250 epochs. This is a mistake on our end for training it too long where it begins to deteriorate
- Luckily since we have a mechanism to store the best model, we will be able to access the 40 FID models after done training.
- Model seems to be better at drawing trucks and simple objects but struggles with animals.
While in this training, the FID doesn't seem very impressive, in our previous runs it was much lower. This technique also helped stabilise the training of our subsequent models. A recent addition but overall value adder.
Spectral Normalisation¶
Spectral Normalisation is commonly applied to GANs to promote training stability and convergence. Takeru Miyato introduced it in this paper. Intuitively speaking, spectral normalisation prevents the weights from changing too rapidly, thus making it less sensitive.
Theory Spectral Normalisation's success is because it can enforce Lipschitz continuity in the discriminator. Lipschitz continuity is a mathematical property that characterizes how much a function changes in response to a small change in input,

In this graph, we x changed, the increase in the graph is not too drastic and stays within the moving boundaries. This is a Lipschitz continuous function. Spectral Normalisation seeks to enforce this continuity on the discriminator's gradient.
class SpectralDiscriminator(nn.Module):
def __init__(self):
super(SpectralDiscriminator, self).__init__()
self.conv_layers = nn.Sequential(
nn.utils.spectral_norm(nn.Conv2d(CHANNELS, 32, kernel_size=4, stride=2, padding=1)),
nn.BatchNorm2d(32),
nn.LeakyReLU(0.1, inplace=True),
nn.utils.spectral_norm(nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1)),
nn.BatchNorm2d(64),
nn.LeakyReLU(0.1, inplace=True),
nn.utils.spectral_norm(nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1)),
nn.BatchNorm2d(128),
nn.LeakyReLU(0.1, inplace=True),
nn.utils.spectral_norm(nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1)),
nn.BatchNorm2d(256),
nn.LeakyReLU(0.1, inplace=True),
nn.AvgPool2d(2, stride=2)
)
self.output_layers = nn.Sequential(
nn.Linear(256 + NUM_CLASS, 512),
nn.LeakyReLU(0.1, inplace=True),
nn.Linear(512, 1),
nn.Sigmoid()
)
def forward(self, x, labels):
output = self.conv_layers(x).squeeze()
x = torch.cat((output,labels), dim=1)
x = self.output_layers(x)
return x
gen_simple = ResizeGenerator(128,1024).to(device)
disc_spectral = SpectralDiscriminator().to(device)
sngan = BalancedGAN(gen_simple,disc_spectral,train_loader)
sngan.fit(501,train_loader)
plot_losses(501,[(r1gan,"R1 & Smoothing"),
(balancedgan,"Balanced"),(sngan,"Spectral")])
sngan.save("sngan-500e")
torch.cuda.empty_cache()
Training BalancedGAN for 501 Epochs
Training Epoch 1/501: 100%|██████████| 98/98 [00:03<00:00, 25.78it/s, disc_loss=0.496, gen_loss=1.09]
100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 118.38399505615234, KID: 0.10447444766759872
Training Epoch 2/501: 100%|██████████| 98/98 [00:04<00:00, 24.07it/s, disc_loss=0.604, gen_loss=1.16] Training Epoch 3/501: 100%|██████████| 98/98 [00:03<00:00, 24.98it/s, disc_loss=0.546, gen_loss=1.61] Training Epoch 4/501: 100%|██████████| 98/98 [00:03<00:00, 26.13it/s, disc_loss=0.483, gen_loss=1.81] Training Epoch 5/501: 100%|██████████| 98/98 [00:03<00:00, 26.21it/s, disc_loss=0.458, gen_loss=1.76] Training Epoch 6/501: 100%|██████████| 98/98 [00:03<00:00, 27.91it/s, disc_loss=0.481, gen_loss=1.92] Training Epoch 7/501: 100%|██████████| 98/98 [00:01<00:00, 49.21it/s, disc_loss=0.702, gen_loss=1.4] Training Epoch 8/501: 100%|██████████| 98/98 [00:03<00:00, 30.87it/s, disc_loss=0.688, gen_loss=1.72] Training Epoch 9/501: 100%|██████████| 98/98 [00:03<00:00, 26.86it/s, disc_loss=0.518, gen_loss=1.66] Training Epoch 10/501: 100%|██████████| 98/98 [00:03<00:00, 31.43it/s, gen_loss=0.939] Training Epoch 11/501: 100%|██████████| 98/98 [00:02<00:00, 34.25it/s, disc_loss=0.671, gen_loss=1.83] Training Epoch 12/501: 100%|██████████| 98/98 [00:02<00:00, 45.84it/s, gen_loss=0.068] Training Epoch 13/501: 100%|██████████| 98/98 [00:02<00:00, 38.52it/s, disc_loss=0.489, gen_loss=1.7] Training Epoch 14/501: 100%|██████████| 98/98 [00:03<00:00, 28.21it/s, disc_loss=0.521, gen_loss=1.51] Training Epoch 15/501: 100%|██████████| 98/98 [00:02<00:00, 33.44it/s, disc_loss=0.453, gen_loss=1.76] Training Epoch 16/501: 100%|██████████| 98/98 [00:02<00:00, 34.76it/s, disc_loss=0.522, gen_loss=1.25] Training Epoch 17/501: 100%|██████████| 98/98 [00:02<00:00, 35.03it/s, gen_loss=0.0985] Training Epoch 18/501: 100%|██████████| 98/98 [00:02<00:00, 38.94it/s, gen_loss=1.3] Training Epoch 19/501: 100%|██████████| 98/98 [00:03<00:00, 30.47it/s, disc_loss=0.469, gen_loss=1.44] Training Epoch 20/501: 100%|██████████| 98/98 [00:02<00:00, 35.15it/s, gen_loss=1.3] Training Epoch 21/501: 100%|██████████| 98/98 [00:02<00:00, 33.83it/s, disc_loss=0.472, gen_loss=1.72] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 86.5225601196289, KID: 0.08091682195663452
Training Epoch 22/501: 100%|██████████| 98/98 [00:02<00:00, 34.71it/s, gen_loss=0.0938] Training Epoch 23/501: 100%|██████████| 98/98 [00:02<00:00, 45.39it/s, disc_loss=0.463, gen_loss=1.6] Training Epoch 24/501: 100%|██████████| 98/98 [00:03<00:00, 30.29it/s, disc_loss=0.571, gen_loss=1.31] Training Epoch 25/501: 100%|██████████| 98/98 [00:03<00:00, 29.39it/s, disc_loss=0.456, gen_loss=1.5] Training Epoch 26/501: 100%|██████████| 98/98 [00:03<00:00, 27.33it/s, disc_loss=0.483, gen_loss=2.03] Training Epoch 27/501: 100%|██████████| 98/98 [00:03<00:00, 31.56it/s, disc_loss=0.432, gen_loss=1.66] Training Epoch 28/501: 100%|██████████| 98/98 [00:01<00:00, 50.16it/s, gen_loss=0.0452] Training Epoch 29/501: 100%|██████████| 98/98 [00:02<00:00, 35.43it/s, disc_loss=0.429, gen_loss=1.82] Training Epoch 30/501: 100%|██████████| 98/98 [00:02<00:00, 33.29it/s, disc_loss=0.434, gen_loss=1.78] Training Epoch 31/501: 100%|██████████| 98/98 [00:03<00:00, 31.55it/s, disc_loss=0.433, gen_loss=1.62] Training Epoch 32/501: 100%|██████████| 98/98 [00:03<00:00, 31.95it/s, disc_loss=0.492, gen_loss=1.38] Training Epoch 33/501: 100%|██████████| 98/98 [00:02<00:00, 42.43it/s, gen_loss=0.045] Training Epoch 34/501: 100%|██████████| 98/98 [00:02<00:00, 35.17it/s, disc_loss=0.488, gen_loss=1.94] Training Epoch 35/501: 100%|██████████| 98/98 [00:03<00:00, 31.39it/s, disc_loss=0.759, gen_loss=1.84] Training Epoch 36/501: 100%|██████████| 98/98 [00:03<00:00, 28.86it/s, disc_loss=0.432, gen_loss=1.93] Training Epoch 37/501: 100%|██████████| 98/98 [00:02<00:00, 33.47it/s, gen_loss=1.7] Training Epoch 38/501: 100%|██████████| 98/98 [00:02<00:00, 36.09it/s, gen_loss=0.0952] Training Epoch 39/501: 100%|██████████| 98/98 [00:02<00:00, 39.75it/s, disc_loss=0.508, gen_loss=1.33] Training Epoch 40/501: 100%|██████████| 98/98 [00:02<00:00, 33.57it/s, disc_loss=0.431, gen_loss=1.4] Training Epoch 41/501: 100%|██████████| 98/98 [00:03<00:00, 31.13it/s, gen_loss=1.2] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 52.430625915527344, KID: 0.0473259873688221
Training Epoch 42/501: 100%|██████████| 98/98 [00:03<00:00, 31.83it/s, disc_loss=0.571, gen_loss=2.62] Training Epoch 43/501: 100%|██████████| 98/98 [00:03<00:00, 29.80it/s, gen_loss=0.314] Training Epoch 44/501: 100%|██████████| 98/98 [00:02<00:00, 43.17it/s, disc_loss=0.418, gen_loss=1.84] Training Epoch 45/501: 100%|██████████| 98/98 [00:03<00:00, 31.95it/s, gen_loss=1.68] Training Epoch 46/501: 100%|██████████| 98/98 [00:03<00:00, 31.53it/s, gen_loss=0.502] Training Epoch 47/501: 100%|██████████| 98/98 [00:03<00:00, 29.61it/s, disc_loss=0.532, gen_loss=1.23] Training Epoch 48/501: 100%|██████████| 98/98 [00:03<00:00, 30.23it/s, gen_loss=1.05] Training Epoch 49/501: 100%|██████████| 98/98 [00:02<00:00, 46.53it/s, gen_loss=0.0376] Training Epoch 50/501: 100%|██████████| 98/98 [00:03<00:00, 32.09it/s, disc_loss=0.431, gen_loss=1.64] Training Epoch 51/501: 100%|██████████| 98/98 [00:03<00:00, 30.46it/s, disc_loss=0.464, gen_loss=1.85]
Training Epoch 52/501: 100%|██████████| 98/98 [00:03<00:00, 30.77it/s, disc_loss=0.605, gen_loss=1.66] Training Epoch 53/501: 100%|██████████| 98/98 [00:03<00:00, 30.07it/s, disc_loss=0.456, gen_loss=1.89] Training Epoch 54/501: 100%|██████████| 98/98 [00:02<00:00, 38.52it/s, gen_loss=0.0486] Training Epoch 55/501: 100%|██████████| 98/98 [00:02<00:00, 35.28it/s, disc_loss=0.439, gen_loss=2.23] Training Epoch 56/501: 100%|██████████| 98/98 [00:03<00:00, 30.04it/s, gen_loss=1.05] Training Epoch 57/501: 100%|██████████| 98/98 [00:03<00:00, 32.53it/s, disc_loss=0.407, gen_loss=1.88] Training Epoch 58/501: 100%|██████████| 98/98 [00:03<00:00, 30.90it/s, disc_loss=0.488, gen_loss=1.62] Training Epoch 59/501: 100%|██████████| 98/98 [00:02<00:00, 34.02it/s, gen_loss=0.0854] Training Epoch 60/501: 100%|██████████| 98/98 [00:02<00:00, 37.59it/s, gen_loss=2.07] Training Epoch 61/501: 100%|██████████| 98/98 [00:02<00:00, 33.49it/s, disc_loss=0.38, gen_loss=1.97] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 83.45834350585938, KID: 0.07279352843761444
Training Epoch 62/501: 100%|██████████| 98/98 [00:02<00:00, 38.22it/s, disc_loss=0.408, gen_loss=1.66] Training Epoch 63/501: 100%|██████████| 98/98 [00:03<00:00, 31.86it/s, disc_loss=0.418, gen_loss=2.22] Training Epoch 64/501: 100%|██████████| 98/98 [00:02<00:00, 33.48it/s, gen_loss=0.453] Training Epoch 65/501: 100%|██████████| 98/98 [00:02<00:00, 48.50it/s, gen_loss=1.65] Training Epoch 66/501: 100%|██████████| 98/98 [00:03<00:00, 29.67it/s, disc_loss=0.391, gen_loss=1.95] Training Epoch 67/501: 100%|██████████| 98/98 [00:03<00:00, 31.87it/s, gen_loss=1.74] Training Epoch 68/501: 100%|██████████| 98/98 [00:03<00:00, 30.64it/s, gen_loss=0.78] Training Epoch 69/501: 100%|██████████| 98/98 [00:03<00:00, 30.86it/s, disc_loss=0.397, gen_loss=1.99] Training Epoch 70/501: 100%|██████████| 98/98 [00:02<00:00, 43.62it/s, gen_loss=0.0653] Training Epoch 71/501: 100%|██████████| 98/98 [00:03<00:00, 32.20it/s, disc_loss=0.456, gen_loss=1.68] Training Epoch 72/501: 100%|██████████| 98/98 [00:03<00:00, 30.50it/s, disc_loss=0.544, gen_loss=2.31] Training Epoch 73/501: 100%|██████████| 98/98 [00:02<00:00, 33.77it/s, gen_loss=2.16] Training Epoch 74/501: 100%|██████████| 98/98 [00:02<00:00, 33.11it/s, gen_loss=0.614] Training Epoch 75/501: 100%|██████████| 98/98 [00:02<00:00, 37.65it/s, gen_loss=0.0325] Training Epoch 76/501: 100%|██████████| 98/98 [00:02<00:00, 36.05it/s, disc_loss=0.71, gen_loss=1.7] Training Epoch 77/501: 100%|██████████| 98/98 [00:03<00:00, 28.37it/s, disc_loss=0.437, gen_loss=1.83] Training Epoch 78/501: 100%|██████████| 98/98 [00:02<00:00, 33.04it/s, disc_loss=0.426, gen_loss=1.59] Training Epoch 79/501: 100%|██████████| 98/98 [00:03<00:00, 27.60it/s, disc_loss=0.414, gen_loss=1.88] Training Epoch 80/501: 100%|██████████| 98/98 [00:03<00:00, 32.32it/s, gen_loss=0.122] Training Epoch 81/501: 100%|██████████| 98/98 [00:02<00:00, 41.05it/s, disc_loss=0.358, gen_loss=2.37] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 112.64698028564453, KID: 0.11700954288244247
Training Epoch 82/501: 100%|██████████| 98/98 [00:03<00:00, 30.57it/s, gen_loss=0.494] Training Epoch 83/501: 100%|██████████| 98/98 [00:03<00:00, 28.57it/s, disc_loss=0.405, gen_loss=1.78] Training Epoch 84/501: 100%|██████████| 98/98 [00:03<00:00, 31.58it/s, gen_loss=1.81] Training Epoch 85/501: 100%|██████████| 98/98 [00:03<00:00, 29.60it/s, disc_loss=0.396, gen_loss=1.6] Training Epoch 86/501: 100%|██████████| 98/98 [00:02<00:00, 45.79it/s, gen_loss=0.0281] Training Epoch 87/501: 100%|██████████| 98/98 [00:03<00:00, 26.46it/s, disc_loss=0.456, gen_loss=1.31] Training Epoch 88/501: 100%|██████████| 98/98 [00:03<00:00, 28.91it/s, gen_loss=1.35] Training Epoch 89/501: 100%|██████████| 98/98 [00:03<00:00, 27.78it/s, disc_loss=0.438, gen_loss=2.11] Training Epoch 90/501: 100%|██████████| 98/98 [00:02<00:00, 38.88it/s, disc_loss=0.415, gen_loss=1.81] Training Epoch 91/501: 100%|██████████| 98/98 [00:02<00:00, 43.44it/s, gen_loss=0.0462] Training Epoch 92/501: 100%|██████████| 98/98 [00:03<00:00, 31.50it/s, disc_loss=0.413, gen_loss=1.75] Training Epoch 93/501: 100%|██████████| 98/98 [00:03<00:00, 28.11it/s, disc_loss=0.415, gen_loss=1.87] Training Epoch 94/501: 100%|██████████| 98/98 [00:03<00:00, 27.75it/s, disc_loss=0.41, gen_loss=1.98] Training Epoch 95/501: 100%|██████████| 98/98 [00:03<00:00, 26.43it/s, gen_loss=1.08] Training Epoch 96/501: 100%|██████████| 98/98 [00:02<00:00, 36.00it/s, gen_loss=0.057] Training Epoch 97/501: 100%|██████████| 98/98 [00:02<00:00, 33.91it/s, gen_loss=1.34] Training Epoch 98/501: 100%|██████████| 98/98 [00:03<00:00, 31.12it/s, disc_loss=0.402, gen_loss=1.86] Training Epoch 99/501: 100%|██████████| 98/98 [00:03<00:00, 31.52it/s, disc_loss=0.89, gen_loss=2] Training Epoch 100/501: 100%|██████████| 98/98 [00:03<00:00, 28.57it/s, disc_loss=0.391, gen_loss=1.7] Training Epoch 101/501: 100%|██████████| 98/98 [00:03<00:00, 32.33it/s, gen_loss=0.184] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 60.21035385131836, KID: 0.05029847472906113
Training Epoch 102/501: 100%|██████████| 98/98 [00:02<00:00, 47.49it/s, disc_loss=0.418, gen_loss=2.07] Training Epoch 103/501: 100%|██████████| 98/98 [00:03<00:00, 29.15it/s, gen_loss=0.591] Training Epoch 104/501: 100%|██████████| 98/98 [00:03<00:00, 30.36it/s, gen_loss=0.917] Training Epoch 105/501: 100%|██████████| 98/98 [00:03<00:00, 30.84it/s, gen_loss=1.13] Training Epoch 106/501: 100%|██████████| 98/98 [00:03<00:00, 29.35it/s, disc_loss=0.427, gen_loss=1.7] Training Epoch 107/501: 100%|██████████| 98/98 [00:02<00:00, 48.46it/s, gen_loss=0.0335] Training Epoch 108/501: 100%|██████████| 98/98 [00:03<00:00, 29.53it/s, disc_loss=0.557, gen_loss=1.41] Training Epoch 109/501: 100%|██████████| 98/98 [00:03<00:00, 29.99it/s, gen_loss=2] Training Epoch 110/501: 100%|██████████| 98/98 [00:03<00:00, 28.70it/s, disc_loss=0.439, gen_loss=1.25] Training Epoch 111/501: 100%|██████████| 98/98 [00:03<00:00, 32.64it/s, disc_loss=0.435, gen_loss=1.8] Training Epoch 112/501: 100%|██████████| 98/98 [00:02<00:00, 40.08it/s, gen_loss=0.0334] Training Epoch 113/501: 100%|██████████| 98/98 [00:02<00:00, 37.63it/s, disc_loss=0.416, gen_loss=1.59] Training Epoch 114/501: 100%|██████████| 98/98 [00:03<00:00, 30.04it/s, disc_loss=0.418, gen_loss=1.55] Training Epoch 115/501: 100%|██████████| 98/98 [00:03<00:00, 29.22it/s, disc_loss=0.425, gen_loss=2.15] Training Epoch 116/501: 100%|██████████| 98/98 [00:03<00:00, 28.09it/s, disc_loss=0.422, gen_loss=2.53] Training Epoch 117/501: 100%|██████████| 98/98 [00:02<00:00, 37.77it/s, gen_loss=0.0767] Training Epoch 118/501: 100%|██████████| 98/98 [00:02<00:00, 36.63it/s, disc_loss=0.419, gen_loss=2.04] Training Epoch 119/501: 100%|██████████| 98/98 [00:03<00:00, 28.50it/s, disc_loss=0.423, gen_loss=2.32] Training Epoch 120/501: 100%|██████████| 98/98 [00:03<00:00, 31.34it/s, gen_loss=1.02] Training Epoch 121/501: 100%|██████████| 98/98 [00:03<00:00, 29.92it/s, disc_loss=0.4, gen_loss=1.79] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 44.08586120605469, KID: 0.03491782024502754
Training Epoch 122/501: 100%|██████████| 98/98 [00:02<00:00, 33.89it/s, gen_loss=0.219] Training Epoch 123/501: 100%|██████████| 98/98 [00:02<00:00, 47.78it/s, disc_loss=0.465, gen_loss=2.14] Training Epoch 124/501: 100%|██████████| 98/98 [00:03<00:00, 29.54it/s, gen_loss=1.42] Training Epoch 125/501: 100%|██████████| 98/98 [00:03<00:00, 31.11it/s, disc_loss=0.906, gen_loss=1.31] Training Epoch 126/501: 100%|██████████| 98/98 [00:03<00:00, 29.22it/s, disc_loss=0.436, gen_loss=2.38] Training Epoch 127/501: 100%|██████████| 98/98 [00:03<00:00, 31.46it/s, disc_loss=0.415, gen_loss=1.82] Training Epoch 128/501: 100%|██████████| 98/98 [00:02<00:00, 40.29it/s, gen_loss=0.0367] Training Epoch 129/501: 100%|██████████| 98/98 [00:03<00:00, 29.01it/s, disc_loss=0.422, gen_loss=1.48] Training Epoch 130/501: 100%|██████████| 98/98 [00:03<00:00, 29.60it/s, disc_loss=0.447, gen_loss=2.11] Training Epoch 131/501: 100%|██████████| 98/98 [00:03<00:00, 30.92it/s, gen_loss=1.56] Training Epoch 132/501: 100%|██████████| 98/98 [00:02<00:00, 35.52it/s, gen_loss=2.4] Training Epoch 133/501: 100%|██████████| 98/98 [00:02<00:00, 39.24it/s, gen_loss=0.0539] Training Epoch 134/501: 100%|██████████| 98/98 [00:02<00:00, 33.30it/s, disc_loss=0.411, gen_loss=1.62] Training Epoch 135/501: 100%|██████████| 98/98 [00:03<00:00, 29.88it/s, disc_loss=0.459, gen_loss=1.71] Training Epoch 136/501: 100%|██████████| 98/98 [00:03<00:00, 29.45it/s, disc_loss=0.416, gen_loss=1.82] Training Epoch 137/501: 100%|██████████| 98/98 [00:03<00:00, 26.72it/s, disc_loss=0.401, gen_loss=2.02] Training Epoch 138/501: 100%|██████████| 98/98 [00:02<00:00, 36.64it/s, gen_loss=0.153] Training Epoch 139/501: 100%|██████████| 98/98 [00:02<00:00, 39.60it/s, disc_loss=0.413, gen_loss=1.65] Training Epoch 140/501: 100%|██████████| 98/98 [00:02<00:00, 33.12it/s, gen_loss=2.06] Training Epoch 141/501: 100%|██████████| 98/98 [00:03<00:00, 27.72it/s, disc_loss=0.408, gen_loss=2.13] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 47.73335647583008, KID: 0.037623703479766846
Training Epoch 142/501: 100%|██████████| 98/98 [00:03<00:00, 30.58it/s, disc_loss=0.419, gen_loss=1.87] Training Epoch 143/501: 100%|██████████| 98/98 [00:03<00:00, 30.69it/s, gen_loss=1.03] Training Epoch 144/501: 100%|██████████| 98/98 [00:01<00:00, 49.38it/s, disc_loss=1.81, gen_loss=2.89] Training Epoch 145/501: 100%|██████████| 98/98 [00:03<00:00, 26.44it/s, gen_loss=1.18] Training Epoch 146/501: 100%|██████████| 98/98 [00:03<00:00, 29.02it/s, gen_loss=3.67] Training Epoch 147/501: 100%|██████████| 98/98 [00:03<00:00, 28.79it/s, disc_loss=0.414, gen_loss=2.34] Training Epoch 148/501: 100%|██████████| 98/98 [00:03<00:00, 32.32it/s, disc_loss=0.488, gen_loss=2.53] Training Epoch 149/501: 100%|██████████| 98/98 [00:02<00:00, 42.32it/s, gen_loss=0.0304] Training Epoch 150/501: 100%|██████████| 98/98 [00:03<00:00, 30.49it/s, disc_loss=0.442, gen_loss=1.78] Training Epoch 151/501: 100%|██████████| 98/98 [00:03<00:00, 29.02it/s, gen_loss=1.76]
Training Epoch 152/501: 100%|██████████| 98/98 [00:03<00:00, 27.78it/s, disc_loss=0.414, gen_loss=2.44] Training Epoch 153/501: 100%|██████████| 98/98 [00:03<00:00, 29.71it/s, gen_loss=0.378] Training Epoch 154/501: 100%|██████████| 98/98 [00:02<00:00, 37.75it/s, gen_loss=0.0752] Training Epoch 155/501: 100%|██████████| 98/98 [00:02<00:00, 37.52it/s, disc_loss=0.402, gen_loss=1.63] Training Epoch 156/501: 100%|██████████| 98/98 [00:03<00:00, 28.49it/s, disc_loss=0.413, gen_loss=1.68] Training Epoch 157/501: 100%|██████████| 98/98 [00:02<00:00, 33.91it/s, disc_loss=1.03, gen_loss=3.41] Training Epoch 158/501: 100%|██████████| 98/98 [00:03<00:00, 29.26it/s, disc_loss=0.408, gen_loss=2.28] Training Epoch 159/501: 100%|██████████| 98/98 [00:02<00:00, 34.04it/s, gen_loss=0.127] Training Epoch 160/501: 100%|██████████| 98/98 [00:02<00:00, 41.74it/s, disc_loss=0.377, gen_loss=1.98] Training Epoch 161/501: 100%|██████████| 98/98 [00:03<00:00, 29.30it/s, disc_loss=0.58, gen_loss=0.618] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 48.613922119140625, KID: 0.04040231928229332
Training Epoch 162/501: 100%|██████████| 98/98 [00:03<00:00, 27.21it/s, disc_loss=0.402, gen_loss=1.94] Training Epoch 163/501: 100%|██████████| 98/98 [00:03<00:00, 29.04it/s, gen_loss=0.409] Training Epoch 164/501: 100%|██████████| 98/98 [00:03<00:00, 30.01it/s, disc_loss=0.439, gen_loss=2.12] Training Epoch 165/501: 100%|██████████| 98/98 [00:02<00:00, 48.36it/s, gen_loss=0.0313] Training Epoch 166/501: 100%|██████████| 98/98 [00:03<00:00, 26.92it/s, disc_loss=0.397, gen_loss=2.24] Training Epoch 167/501: 100%|██████████| 98/98 [00:03<00:00, 28.88it/s, gen_loss=1.87] Training Epoch 168/501: 100%|██████████| 98/98 [00:03<00:00, 29.59it/s, disc_loss=0.379, gen_loss=2.41] Training Epoch 169/501: 100%|██████████| 98/98 [00:03<00:00, 29.73it/s, disc_loss=0.461, gen_loss=1.71] Training Epoch 170/501: 100%|██████████| 98/98 [00:02<00:00, 41.97it/s, gen_loss=0.0366] Training Epoch 171/501: 100%|██████████| 98/98 [00:03<00:00, 31.60it/s, disc_loss=0.394, gen_loss=2.22] Training Epoch 172/501: 100%|██████████| 98/98 [00:03<00:00, 30.90it/s, disc_loss=0.428, gen_loss=1.49] Training Epoch 173/501: 100%|██████████| 98/98 [00:03<00:00, 29.75it/s, disc_loss=0.413, gen_loss=1.93] Training Epoch 174/501: 100%|██████████| 98/98 [00:03<00:00, 28.47it/s, gen_loss=1.1] Training Epoch 175/501: 100%|██████████| 98/98 [00:02<00:00, 38.71it/s, gen_loss=0.0558] Training Epoch 176/501: 100%|██████████| 98/98 [00:02<00:00, 36.34it/s, disc_loss=0.446, gen_loss=1.74] Training Epoch 177/501: 100%|██████████| 98/98 [00:03<00:00, 28.58it/s, disc_loss=0.463, gen_loss=2.27] Training Epoch 178/501: 100%|██████████| 98/98 [00:03<00:00, 27.63it/s, disc_loss=0.462, gen_loss=1.52] Training Epoch 179/501: 100%|██████████| 98/98 [00:03<00:00, 29.36it/s, disc_loss=0.413, gen_loss=1.52] Training Epoch 180/501: 100%|██████████| 98/98 [00:02<00:00, 33.28it/s, gen_loss=0.238] Training Epoch 181/501: 100%|██████████| 98/98 [00:02<00:00, 45.78it/s, disc_loss=0.405, gen_loss=1.82] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 61.84033966064453, KID: 0.05044092237949371
Training Epoch 182/501: 100%|██████████| 98/98 [00:03<00:00, 27.78it/s, gen_loss=0.286] Training Epoch 183/501: 100%|██████████| 98/98 [00:03<00:00, 28.19it/s, disc_loss=0.43, gen_loss=2.64] Training Epoch 184/501: 100%|██████████| 98/98 [00:03<00:00, 31.97it/s, gen_loss=1.45] Training Epoch 185/501: 100%|██████████| 98/98 [00:02<00:00, 33.90it/s, disc_loss=0.42, gen_loss=1.69] Training Epoch 186/501: 100%|██████████| 98/98 [00:02<00:00, 48.77it/s, gen_loss=0.047] Training Epoch 187/501: 100%|██████████| 98/98 [00:03<00:00, 26.50it/s, disc_loss=0.401, gen_loss=1.75] Training Epoch 188/501: 100%|██████████| 98/98 [00:03<00:00, 31.16it/s, disc_loss=0.401, gen_loss=1.81] Training Epoch 189/501: 100%|██████████| 98/98 [00:03<00:00, 28.94it/s, disc_loss=0.443, gen_loss=2.49] Training Epoch 190/501: 100%|██████████| 98/98 [00:03<00:00, 31.30it/s, disc_loss=1.05, gen_loss=2.5] Training Epoch 191/501: 100%|██████████| 98/98 [00:02<00:00, 38.56it/s, gen_loss=0.0643] Training Epoch 192/501: 100%|██████████| 98/98 [00:02<00:00, 37.54it/s, disc_loss=0.375, gen_loss=2.18] Training Epoch 193/501: 100%|██████████| 98/98 [00:03<00:00, 31.12it/s, disc_loss=0.407, gen_loss=1.57] Training Epoch 194/501: 100%|██████████| 98/98 [00:03<00:00, 30.56it/s, disc_loss=0.4, gen_loss=2.01] Training Epoch 195/501: 100%|██████████| 98/98 [00:03<00:00, 27.17it/s, disc_loss=0.421, gen_loss=1.53] Training Epoch 196/501: 100%|██████████| 98/98 [00:02<00:00, 34.38it/s, gen_loss=0.11] Training Epoch 197/501: 100%|██████████| 98/98 [00:02<00:00, 38.63it/s, disc_loss=0.419, gen_loss=2.12] Training Epoch 198/501: 100%|██████████| 98/98 [00:03<00:00, 28.99it/s, disc_loss=0.394, gen_loss=2.05] Training Epoch 199/501: 100%|██████████| 98/98 [00:03<00:00, 27.77it/s, disc_loss=0.419, gen_loss=1.58] Training Epoch 200/501: 100%|██████████| 98/98 [00:03<00:00, 30.16it/s, disc_loss=0.421, gen_loss=1.64] Training Epoch 201/501: 100%|██████████| 98/98 [00:03<00:00, 31.48it/s, gen_loss=0.41] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 48.21028518676758, KID: 0.037098322063684464
Training Epoch 202/501: 100%|██████████| 98/98 [00:01<00:00, 51.65it/s, disc_loss=0.879, gen_loss=2.47] Training Epoch 203/501: 100%|██████████| 98/98 [00:04<00:00, 24.45it/s, disc_loss=0.42, gen_loss=1.88] Training Epoch 204/501: 100%|██████████| 98/98 [00:02<00:00, 32.87it/s, gen_loss=0.737] Training Epoch 205/501: 100%|██████████| 98/98 [00:03<00:00, 28.81it/s, disc_loss=0.427, gen_loss=1.57] Training Epoch 206/501: 100%|██████████| 98/98 [00:03<00:00, 32.25it/s, disc_loss=0.423, gen_loss=1.82] Training Epoch 207/501: 100%|██████████| 98/98 [00:02<00:00, 44.40it/s, gen_loss=0.0398] Training Epoch 208/501: 100%|██████████| 98/98 [00:03<00:00, 31.46it/s, disc_loss=0.417, gen_loss=2.23] Training Epoch 209/501: 100%|██████████| 98/98 [00:03<00:00, 31.47it/s, disc_loss=0.424, gen_loss=1.87] Training Epoch 210/501: 100%|██████████| 98/98 [00:02<00:00, 34.56it/s, disc_loss=0.439, gen_loss=1.97] Training Epoch 211/501: 100%|██████████| 98/98 [00:03<00:00, 29.69it/s, gen_loss=0.264] Training Epoch 212/501: 100%|██████████| 98/98 [00:02<00:00, 35.01it/s, gen_loss=0.0555] Training Epoch 213/501: 100%|██████████| 98/98 [00:02<00:00, 42.72it/s, disc_loss=0.445, gen_loss=1.4] Training Epoch 214/501: 100%|██████████| 98/98 [00:03<00:00, 30.19it/s, disc_loss=0.405, gen_loss=1.92] Training Epoch 215/501: 100%|██████████| 98/98 [00:03<00:00, 29.12it/s, gen_loss=0.432] Training Epoch 216/501: 100%|██████████| 98/98 [00:03<00:00, 28.72it/s, disc_loss=0.408, gen_loss=1.83] Training Epoch 217/501: 100%|██████████| 98/98 [00:02<00:00, 34.71it/s, gen_loss=0.115] Training Epoch 218/501: 100%|██████████| 98/98 [00:02<00:00, 43.66it/s, disc_loss=0.413, gen_loss=1.9] Training Epoch 219/501: 100%|██████████| 98/98 [00:03<00:00, 28.85it/s, gen_loss=0.398] Training Epoch 220/501: 100%|██████████| 98/98 [00:03<00:00, 28.03it/s, disc_loss=0.418, gen_loss=2.34] Training Epoch 221/501: 100%|██████████| 98/98 [00:03<00:00, 31.66it/s, gen_loss=0.571] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 49.84903335571289, KID: 0.04004843160510063
Training Epoch 222/501: 100%|██████████| 98/98 [00:03<00:00, 28.27it/s, gen_loss=1.09] Training Epoch 223/501: 100%|██████████| 98/98 [00:01<00:00, 49.59it/s, gen_loss=0.0316] Training Epoch 224/501: 100%|██████████| 98/98 [00:03<00:00, 25.68it/s, disc_loss=0.39, gen_loss=1.81] Training Epoch 225/501: 100%|██████████| 98/98 [00:03<00:00, 31.21it/s, disc_loss=0.416, gen_loss=2.16] Training Epoch 226/501: 100%|██████████| 98/98 [00:03<00:00, 29.97it/s, disc_loss=0.381, gen_loss=2.26] Training Epoch 227/501: 100%|██████████| 98/98 [00:03<00:00, 30.13it/s, disc_loss=0.407, gen_loss=2.53] Training Epoch 228/501: 100%|██████████| 98/98 [00:02<00:00, 41.09it/s, gen_loss=0.0701] Training Epoch 229/501: 100%|██████████| 98/98 [00:03<00:00, 29.00it/s, disc_loss=0.384, gen_loss=1.84] Training Epoch 230/501: 100%|██████████| 98/98 [00:03<00:00, 30.10it/s, disc_loss=0.381, gen_loss=2.1] Training Epoch 231/501: 100%|██████████| 98/98 [00:03<00:00, 30.00it/s, disc_loss=0.414, gen_loss=2.06] Training Epoch 232/501: 100%|██████████| 98/98 [00:03<00:00, 28.89it/s, gen_loss=1.86] Training Epoch 233/501: 100%|██████████| 98/98 [00:02<00:00, 39.35it/s, gen_loss=0.0557] Training Epoch 234/501: 100%|██████████| 98/98 [00:02<00:00, 38.06it/s, disc_loss=0.418, gen_loss=1.79] Training Epoch 235/501: 100%|██████████| 98/98 [00:03<00:00, 29.29it/s, disc_loss=0.402, gen_loss=1.95] Training Epoch 236/501: 100%|██████████| 98/98 [00:03<00:00, 28.30it/s, disc_loss=0.379, gen_loss=2.03] Training Epoch 237/501: 100%|██████████| 98/98 [00:03<00:00, 30.04it/s, disc_loss=0.401, gen_loss=2.22] Training Epoch 238/501: 100%|██████████| 98/98 [00:03<00:00, 30.25it/s, gen_loss=0.263] Training Epoch 239/501: 100%|██████████| 98/98 [00:02<00:00, 48.06it/s, disc_loss=0.4, gen_loss=1.86] Training Epoch 240/501: 100%|██████████| 98/98 [00:03<00:00, 28.81it/s, gen_loss=0.722] Training Epoch 241/501: 100%|██████████| 98/98 [00:03<00:00, 29.73it/s, gen_loss=1.03] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 48.531776428222656, KID: 0.03834521397948265
Training Epoch 242/501: 100%|██████████| 98/98 [00:03<00:00, 28.13it/s, gen_loss=1.32] Training Epoch 243/501: 100%|██████████| 98/98 [00:03<00:00, 30.78it/s, disc_loss=0.391, gen_loss=2.06] Training Epoch 244/501: 100%|██████████| 98/98 [00:02<00:00, 45.87it/s, gen_loss=0.0502] Training Epoch 245/501: 100%|██████████| 98/98 [00:03<00:00, 29.20it/s, disc_loss=0.443, gen_loss=1.12] Training Epoch 246/501: 100%|██████████| 98/98 [00:03<00:00, 32.07it/s, disc_loss=0.428, gen_loss=2.06] Training Epoch 247/501: 100%|██████████| 98/98 [00:03<00:00, 28.18it/s, disc_loss=0.398, gen_loss=1.95] Training Epoch 248/501: 100%|██████████| 98/98 [00:03<00:00, 30.18it/s, gen_loss=0.281] Training Epoch 249/501: 100%|██████████| 98/98 [00:02<00:00, 37.32it/s, gen_loss=0.0696] Training Epoch 250/501: 100%|██████████| 98/98 [00:02<00:00, 35.78it/s, disc_loss=0.388, gen_loss=1.93] Training Epoch 251/501: 100%|██████████| 98/98 [00:03<00:00, 29.34it/s, disc_loss=0.405, gen_loss=2.03]
Training Epoch 252/501: 100%|██████████| 98/98 [00:03<00:00, 30.00it/s, disc_loss=0.883, gen_loss=3.04] Training Epoch 253/501: 100%|██████████| 98/98 [00:03<00:00, 28.37it/s, disc_loss=0.417, gen_loss=1.46] Training Epoch 254/501: 100%|██████████| 98/98 [00:02<00:00, 36.22it/s, gen_loss=0.148] Training Epoch 255/501: 100%|██████████| 98/98 [00:02<00:00, 42.75it/s, disc_loss=0.394, gen_loss=2.06] Training Epoch 256/501: 100%|██████████| 98/98 [00:03<00:00, 30.02it/s, gen_loss=0.223] Training Epoch 257/501: 100%|██████████| 98/98 [00:03<00:00, 28.09it/s, disc_loss=0.482, gen_loss=2.78] Training Epoch 258/501: 100%|██████████| 98/98 [00:03<00:00, 28.71it/s, disc_loss=0.431, gen_loss=1.64] Training Epoch 259/501: 100%|██████████| 98/98 [00:02<00:00, 32.74it/s, gen_loss=0.621] Training Epoch 260/501: 100%|██████████| 98/98 [00:01<00:00, 50.63it/s, gen_loss=0.0319] Training Epoch 261/501: 100%|██████████| 98/98 [00:03<00:00, 28.73it/s, disc_loss=0.437, gen_loss=2.55] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 45.05347442626953, KID: 0.03554515913128853
Training Epoch 262/501: 100%|██████████| 98/98 [00:03<00:00, 29.74it/s, disc_loss=0.395, gen_loss=1.66] Training Epoch 263/501: 100%|██████████| 98/98 [00:03<00:00, 32.14it/s, disc_loss=0.395, gen_loss=1.94] Training Epoch 264/501: 100%|██████████| 98/98 [00:03<00:00, 32.05it/s, disc_loss=0.399, gen_loss=2.12] Training Epoch 265/501: 100%|██████████| 98/98 [00:02<00:00, 42.36it/s, gen_loss=0.076] Training Epoch 266/501: 100%|██████████| 98/98 [00:03<00:00, 31.54it/s, disc_loss=0.407, gen_loss=2.07] Training Epoch 267/501: 100%|██████████| 98/98 [00:03<00:00, 29.12it/s, disc_loss=0.388, gen_loss=1.76] Training Epoch 268/501: 100%|██████████| 98/98 [00:03<00:00, 29.53it/s, disc_loss=0.388, gen_loss=1.89] Training Epoch 269/501: 100%|██████████| 98/98 [00:03<00:00, 30.50it/s, gen_loss=0.382] Training Epoch 270/501: 100%|██████████| 98/98 [00:02<00:00, 41.98it/s, gen_loss=0.0707] Training Epoch 271/501: 100%|██████████| 98/98 [00:02<00:00, 35.07it/s, disc_loss=0.404, gen_loss=2.15] Training Epoch 272/501: 100%|██████████| 98/98 [00:03<00:00, 30.66it/s, disc_loss=0.386, gen_loss=2.39] Training Epoch 273/501: 100%|██████████| 98/98 [00:03<00:00, 28.72it/s, gen_loss=0.646] Training Epoch 274/501: 100%|██████████| 98/98 [00:03<00:00, 29.25it/s, gen_loss=1.75] Training Epoch 275/501: 100%|██████████| 98/98 [00:02<00:00, 40.85it/s, gen_loss=0.25] Training Epoch 276/501: 100%|██████████| 98/98 [00:01<00:00, 49.50it/s, disc_loss=0.411, gen_loss=1.88] Training Epoch 277/501: 100%|██████████| 98/98 [00:03<00:00, 29.96it/s, gen_loss=1.05] Training Epoch 278/501: 100%|██████████| 98/98 [00:03<00:00, 28.23it/s, disc_loss=0.402, gen_loss=1.89] Training Epoch 279/501: 100%|██████████| 98/98 [00:03<00:00, 31.01it/s, disc_loss=0.418, gen_loss=2.42] Training Epoch 280/501: 100%|██████████| 98/98 [00:03<00:00, 30.14it/s, disc_loss=0.403, gen_loss=1.78] Training Epoch 281/501: 100%|██████████| 98/98 [00:02<00:00, 46.53it/s, gen_loss=0.0398] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 62.177711486816406, KID: 0.058163534849882126
Training Epoch 282/501: 100%|██████████| 98/98 [00:03<00:00, 26.76it/s, disc_loss=0.417, gen_loss=1.83] Training Epoch 283/501: 100%|██████████| 98/98 [00:03<00:00, 27.60it/s, disc_loss=0.389, gen_loss=1.86] Training Epoch 284/501: 100%|██████████| 98/98 [00:03<00:00, 28.61it/s, disc_loss=0.403, gen_loss=1.62] Training Epoch 285/501: 100%|██████████| 98/98 [00:02<00:00, 37.31it/s, gen_loss=0.385] Training Epoch 286/501: 100%|██████████| 98/98 [00:02<00:00, 44.04it/s, gen_loss=0.0938] Training Epoch 287/501: 100%|██████████| 98/98 [00:02<00:00, 41.70it/s, disc_loss=0.408, gen_loss=1.72] Training Epoch 288/501: 100%|██████████| 98/98 [00:03<00:00, 29.40it/s, disc_loss=0.385, gen_loss=2.42] Training Epoch 289/501: 100%|██████████| 98/98 [00:03<00:00, 27.76it/s, disc_loss=0.402, gen_loss=2.42] Training Epoch 290/501: 100%|██████████| 98/98 [00:03<00:00, 28.05it/s, disc_loss=0.535, gen_loss=2.67] Training Epoch 291/501: 100%|██████████| 98/98 [00:02<00:00, 41.20it/s, gen_loss=0.103] Training Epoch 292/501: 100%|██████████| 98/98 [00:02<00:00, 48.44it/s, disc_loss=0.381, gen_loss=1.78] Training Epoch 293/501: 100%|██████████| 98/98 [00:03<00:00, 30.20it/s, gen_loss=0.336] Training Epoch 294/501: 100%|██████████| 98/98 [00:03<00:00, 27.90it/s, disc_loss=0.421, gen_loss=2.43] Training Epoch 295/501: 100%|██████████| 98/98 [00:03<00:00, 27.26it/s, disc_loss=0.427, gen_loss=2.48] Training Epoch 296/501: 100%|██████████| 98/98 [00:03<00:00, 32.58it/s, gen_loss=0.473] Training Epoch 297/501: 100%|██████████| 98/98 [00:01<00:00, 51.97it/s, gen_loss=0.0367] Training Epoch 298/501: 100%|██████████| 98/98 [00:03<00:00, 25.31it/s, disc_loss=0.39, gen_loss=2.55] Training Epoch 299/501: 100%|██████████| 98/98 [00:03<00:00, 28.48it/s, disc_loss=0.388, gen_loss=2.04] Training Epoch 300/501: 100%|██████████| 98/98 [00:03<00:00, 27.97it/s, disc_loss=0.411, gen_loss=2.22] Training Epoch 301/501: 100%|██████████| 98/98 [00:03<00:00, 29.94it/s, disc_loss=0.398, gen_loss=2.28] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 51.08354949951172, KID: 0.03681596368551254
Training Epoch 302/501: 100%|██████████| 98/98 [00:02<00:00, 42.54it/s, gen_loss=0.0389] Training Epoch 303/501: 100%|██████████| 98/98 [00:03<00:00, 28.59it/s, disc_loss=0.407, gen_loss=1.85] Training Epoch 304/501: 100%|██████████| 98/98 [00:03<00:00, 26.56it/s, disc_loss=0.409, gen_loss=1.44] Training Epoch 305/501: 100%|██████████| 98/98 [00:04<00:00, 24.07it/s, disc_loss=0.442, gen_loss=2.07] Training Epoch 306/501: 100%|██████████| 98/98 [00:03<00:00, 26.46it/s, gen_loss=1.84] Training Epoch 307/501: 100%|██████████| 98/98 [00:02<00:00, 39.62it/s, gen_loss=0.0913] Training Epoch 308/501: 100%|██████████| 98/98 [00:02<00:00, 33.72it/s, disc_loss=0.381, gen_loss=2.29] Training Epoch 309/501: 100%|██████████| 98/98 [00:03<00:00, 27.31it/s, disc_loss=0.436, gen_loss=2.09] Training Epoch 310/501: 100%|██████████| 98/98 [00:03<00:00, 28.98it/s, gen_loss=0.423] Training Epoch 311/501: 100%|██████████| 98/98 [00:03<00:00, 27.91it/s, gen_loss=1.71] Training Epoch 312/501: 100%|██████████| 98/98 [00:02<00:00, 40.64it/s, gen_loss=0.226] Training Epoch 313/501: 100%|██████████| 98/98 [00:02<00:00, 41.70it/s, disc_loss=0.391, gen_loss=1.87] Training Epoch 314/501: 100%|██████████| 98/98 [00:03<00:00, 25.80it/s, disc_loss=0.379, gen_loss=2.44] Training Epoch 315/501: 100%|██████████| 98/98 [00:03<00:00, 29.05it/s, disc_loss=0.388, gen_loss=2.23] Training Epoch 316/501: 100%|██████████| 98/98 [00:03<00:00, 29.89it/s, disc_loss=0.388, gen_loss=1.63] Training Epoch 317/501: 100%|██████████| 98/98 [00:03<00:00, 31.34it/s, disc_loss=0.416, gen_loss=2.44] Training Epoch 318/501: 100%|██████████| 98/98 [00:01<00:00, 49.63it/s, gen_loss=0.0356] Training Epoch 319/501: 100%|██████████| 98/98 [00:02<00:00, 35.28it/s, disc_loss=0.379, gen_loss=2] Training Epoch 320/501: 100%|██████████| 98/98 [00:02<00:00, 35.23it/s, disc_loss=0.376, gen_loss=1.97] Training Epoch 321/501: 100%|██████████| 98/98 [00:03<00:00, 30.04it/s, disc_loss=0.379, gen_loss=2.24] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 70.41686248779297, KID: 0.06263531744480133
Training Epoch 322/501: 100%|██████████| 98/98 [00:03<00:00, 27.78it/s, gen_loss=0.154] Training Epoch 323/501: 100%|██████████| 98/98 [00:02<00:00, 40.10it/s, gen_loss=0.0601] Training Epoch 324/501: 100%|██████████| 98/98 [00:03<00:00, 31.83it/s, disc_loss=0.386, gen_loss=2.1] Training Epoch 325/501: 100%|██████████| 98/98 [00:03<00:00, 31.54it/s, disc_loss=0.393, gen_loss=2.14] Training Epoch 326/501: 100%|██████████| 98/98 [00:03<00:00, 28.58it/s, disc_loss=0.412, gen_loss=1.93] Training Epoch 327/501: 100%|██████████| 98/98 [00:03<00:00, 27.12it/s, disc_loss=1.19, gen_loss=2.89] Training Epoch 328/501: 100%|██████████| 98/98 [00:02<00:00, 39.57it/s, gen_loss=0.142] Training Epoch 329/501: 100%|██████████| 98/98 [00:02<00:00, 42.93it/s, disc_loss=0.398, gen_loss=1.73] Training Epoch 330/501: 100%|██████████| 98/98 [00:03<00:00, 27.17it/s, gen_loss=0.478] Training Epoch 331/501: 100%|██████████| 98/98 [00:03<00:00, 28.77it/s, disc_loss=0.384, gen_loss=2.36] Training Epoch 332/501: 100%|██████████| 98/98 [00:03<00:00, 31.86it/s, disc_loss=0.418, gen_loss=1.81] Training Epoch 333/501: 100%|██████████| 98/98 [00:02<00:00, 34.46it/s, gen_loss=0.364] Training Epoch 334/501: 100%|██████████| 98/98 [00:01<00:00, 57.02it/s, gen_loss=0.0329] Training Epoch 335/501: 100%|██████████| 98/98 [00:03<00:00, 27.64it/s, disc_loss=0.413, gen_loss=1.96] Training Epoch 336/501: 100%|██████████| 98/98 [00:03<00:00, 29.64it/s, disc_loss=0.399, gen_loss=1.79] Training Epoch 337/501: 100%|██████████| 98/98 [00:03<00:00, 30.19it/s, disc_loss=0.386, gen_loss=2] Training Epoch 338/501: 100%|██████████| 98/98 [00:02<00:00, 33.55it/s, disc_loss=0.381, gen_loss=2.04] Training Epoch 339/501: 100%|██████████| 98/98 [00:02<00:00, 40.59it/s, gen_loss=0.0539] Training Epoch 340/501: 100%|██████████| 98/98 [00:03<00:00, 32.63it/s, disc_loss=0.401, gen_loss=1.5] Training Epoch 341/501: 100%|██████████| 98/98 [00:03<00:00, 31.28it/s, disc_loss=0.388, gen_loss=2.02] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 51.100563049316406, KID: 0.038681741803884506
Training Epoch 342/501: 100%|██████████| 98/98 [00:03<00:00, 27.93it/s, disc_loss=0.399, gen_loss=1.89] Training Epoch 343/501: 100%|██████████| 98/98 [00:03<00:00, 28.94it/s, gen_loss=1.3] Training Epoch 344/501: 100%|██████████| 98/98 [00:02<00:00, 42.99it/s, gen_loss=0.0812] Training Epoch 345/501: 100%|██████████| 98/98 [00:02<00:00, 37.36it/s, disc_loss=0.387, gen_loss=1.92] Training Epoch 346/501: 100%|██████████| 98/98 [00:02<00:00, 32.68it/s, disc_loss=0.984, gen_loss=4] Training Epoch 347/501: 100%|██████████| 98/98 [00:03<00:00, 26.74it/s, gen_loss=0.952] Training Epoch 348/501: 100%|██████████| 98/98 [00:03<00:00, 24.98it/s, disc_loss=0.393, gen_loss=1.84] Training Epoch 349/501: 100%|██████████| 98/98 [00:02<00:00, 37.48it/s, gen_loss=0.236] Training Epoch 350/501: 100%|██████████| 98/98 [00:02<00:00, 44.69it/s, disc_loss=0.427, gen_loss=1.72] Training Epoch 351/501: 100%|██████████| 98/98 [00:03<00:00, 27.02it/s, disc_loss=0.412, gen_loss=2.41]
Training Epoch 352/501: 100%|██████████| 98/98 [00:03<00:00, 29.77it/s, disc_loss=0.384, gen_loss=2.43] Training Epoch 353/501: 100%|██████████| 98/98 [00:03<00:00, 28.27it/s, disc_loss=0.39, gen_loss=2.06] Training Epoch 354/501: 100%|██████████| 98/98 [00:03<00:00, 30.31it/s, disc_loss=0.398, gen_loss=2.11] Training Epoch 355/501: 100%|██████████| 98/98 [00:01<00:00, 49.67it/s, gen_loss=0.0529] Training Epoch 356/501: 100%|██████████| 98/98 [00:03<00:00, 29.37it/s, disc_loss=0.414, gen_loss=2.31] Training Epoch 357/501: 100%|██████████| 98/98 [00:03<00:00, 30.17it/s, disc_loss=0.417, gen_loss=1.73] Training Epoch 358/501: 100%|██████████| 98/98 [00:03<00:00, 28.23it/s, disc_loss=0.39, gen_loss=1.85] Training Epoch 359/501: 100%|██████████| 98/98 [00:03<00:00, 31.13it/s, gen_loss=0.413] Training Epoch 360/501: 100%|██████████| 98/98 [00:02<00:00, 42.58it/s, gen_loss=0.102] Training Epoch 361/501: 100%|██████████| 98/98 [00:02<00:00, 34.21it/s, disc_loss=0.389, gen_loss=2.03] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 53.4597053527832, KID: 0.039917852729558945
Training Epoch 362/501: 100%|██████████| 98/98 [00:03<00:00, 27.98it/s, disc_loss=0.427, gen_loss=1.5] Training Epoch 363/501: 100%|██████████| 98/98 [00:03<00:00, 26.98it/s, disc_loss=0.454, gen_loss=2.08] Training Epoch 364/501: 100%|██████████| 98/98 [00:03<00:00, 27.62it/s, gen_loss=0.501] Training Epoch 365/501: 100%|██████████| 98/98 [00:02<00:00, 38.99it/s, gen_loss=0.132] Training Epoch 366/501: 100%|██████████| 98/98 [00:02<00:00, 40.66it/s, disc_loss=0.374, gen_loss=1.64] Training Epoch 367/501: 100%|██████████| 98/98 [00:03<00:00, 26.96it/s, gen_loss=0.725] Training Epoch 368/501: 100%|██████████| 98/98 [00:03<00:00, 28.38it/s, disc_loss=0.42, gen_loss=1.52] Training Epoch 369/501: 100%|██████████| 98/98 [00:03<00:00, 28.25it/s, disc_loss=0.382, gen_loss=2.33] Training Epoch 370/501: 100%|██████████| 98/98 [00:02<00:00, 36.00it/s, gen_loss=0.644] Training Epoch 371/501: 100%|██████████| 98/98 [00:02<00:00, 47.08it/s, gen_loss=0.0593] Training Epoch 372/501: 100%|██████████| 98/98 [00:03<00:00, 30.18it/s, disc_loss=0.423, gen_loss=1.69] Training Epoch 373/501: 100%|██████████| 98/98 [00:03<00:00, 31.24it/s, disc_loss=0.373, gen_loss=2.07] Training Epoch 374/501: 100%|██████████| 98/98 [00:03<00:00, 28.41it/s, disc_loss=0.371, gen_loss=1.98] Training Epoch 375/501: 100%|██████████| 98/98 [00:02<00:00, 33.74it/s, gen_loss=0.195] Training Epoch 376/501: 100%|██████████| 98/98 [00:02<00:00, 42.06it/s, gen_loss=0.0923] Training Epoch 377/501: 100%|██████████| 98/98 [00:03<00:00, 30.95it/s, disc_loss=0.376, gen_loss=1.73] Training Epoch 378/501: 100%|██████████| 98/98 [00:03<00:00, 30.46it/s, disc_loss=0.401, gen_loss=1.66] Training Epoch 379/501: 100%|██████████| 98/98 [00:03<00:00, 26.38it/s, disc_loss=0.367, gen_loss=2.04] Training Epoch 380/501: 100%|██████████| 98/98 [00:03<00:00, 27.28it/s, disc_loss=0.375, gen_loss=1.72] Training Epoch 381/501: 100%|██████████| 98/98 [00:02<00:00, 42.64it/s, gen_loss=0.0707] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 51.53538513183594, KID: 0.0344519168138504
Training Epoch 382/501: 100%|██████████| 98/98 [00:02<00:00, 40.68it/s, disc_loss=0.404, gen_loss=1.9] Training Epoch 383/501: 100%|██████████| 98/98 [00:03<00:00, 27.46it/s, gen_loss=0.353] Training Epoch 384/501: 100%|██████████| 98/98 [00:03<00:00, 25.13it/s, disc_loss=0.378, gen_loss=1.73] Training Epoch 385/501: 100%|██████████| 98/98 [00:03<00:00, 25.32it/s, disc_loss=0.393, gen_loss=1.68] Training Epoch 386/501: 100%|██████████| 98/98 [00:02<00:00, 38.91it/s, gen_loss=0.181] Training Epoch 387/501: 100%|██████████| 98/98 [00:02<00:00, 45.37it/s, disc_loss=0.4, gen_loss=1.91] Training Epoch 388/501: 100%|██████████| 98/98 [00:03<00:00, 26.27it/s, disc_loss=0.373, gen_loss=1.87] Training Epoch 389/501: 100%|██████████| 98/98 [00:03<00:00, 29.16it/s, disc_loss=0.38, gen_loss=1.79] Training Epoch 390/501: 100%|██████████| 98/98 [00:03<00:00, 27.63it/s, disc_loss=0.391, gen_loss=2] Training Epoch 391/501: 100%|██████████| 98/98 [00:02<00:00, 34.00it/s, disc_loss=0.385, gen_loss=2.56] Training Epoch 392/501: 100%|██████████| 98/98 [00:02<00:00, 48.23it/s, gen_loss=0.0537] Training Epoch 393/501: 100%|██████████| 98/98 [00:03<00:00, 29.76it/s, disc_loss=0.389, gen_loss=1.69] Training Epoch 394/501: 100%|██████████| 98/98 [00:03<00:00, 30.15it/s, disc_loss=0.397, gen_loss=2.21] Training Epoch 395/501: 100%|██████████| 98/98 [00:03<00:00, 26.29it/s, disc_loss=0.401, gen_loss=1.7] Training Epoch 396/501: 100%|██████████| 98/98 [00:02<00:00, 34.22it/s, gen_loss=0.775] Training Epoch 397/501: 100%|██████████| 98/98 [00:02<00:00, 44.54it/s, gen_loss=0.0726] Training Epoch 398/501: 100%|██████████| 98/98 [00:03<00:00, 31.61it/s, disc_loss=0.397, gen_loss=1.94] Training Epoch 399/501: 100%|██████████| 98/98 [00:03<00:00, 29.64it/s, disc_loss=0.469, gen_loss=2.81] Training Epoch 400/501: 100%|██████████| 98/98 [00:03<00:00, 26.52it/s, disc_loss=0.386, gen_loss=2.07] Training Epoch 401/501: 100%|██████████| 98/98 [00:03<00:00, 27.82it/s, gen_loss=0.529] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 45.403507232666016, KID: 0.030722767114639282
Training Epoch 402/501: 100%|██████████| 98/98 [00:02<00:00, 41.14it/s, gen_loss=0.165] Training Epoch 403/501: 100%|██████████| 98/98 [00:01<00:00, 50.22it/s, disc_loss=0.391, gen_loss=1.59] Training Epoch 404/501: 100%|██████████| 98/98 [00:03<00:00, 30.30it/s, disc_loss=0.389, gen_loss=2.34] Training Epoch 405/501: 100%|██████████| 98/98 [00:03<00:00, 30.68it/s, disc_loss=0.393, gen_loss=2.19] Training Epoch 406/501: 100%|██████████| 98/98 [00:03<00:00, 26.41it/s, disc_loss=0.377, gen_loss=1.9] Training Epoch 407/501: 100%|██████████| 98/98 [00:02<00:00, 35.70it/s, gen_loss=0.751] Training Epoch 408/501: 100%|██████████| 98/98 [00:01<00:00, 55.70it/s, gen_loss=0.0498] Training Epoch 409/501: 100%|██████████| 98/98 [00:03<00:00, 25.30it/s, disc_loss=0.376, gen_loss=2.09] Training Epoch 410/501: 100%|██████████| 98/98 [00:03<00:00, 29.51it/s, disc_loss=0.371, gen_loss=2.01] Training Epoch 411/501: 100%|██████████| 98/98 [00:04<00:00, 24.28it/s, disc_loss=0.385, gen_loss=2.27] Training Epoch 412/501: 100%|██████████| 98/98 [00:03<00:00, 30.78it/s, gen_loss=0.268] Training Epoch 413/501: 100%|██████████| 98/98 [00:02<00:00, 40.49it/s, gen_loss=0.0727] Training Epoch 414/501: 100%|██████████| 98/98 [00:03<00:00, 32.36it/s, disc_loss=0.391, gen_loss=2.25] Training Epoch 415/501: 100%|██████████| 98/98 [00:03<00:00, 28.76it/s, disc_loss=0.393, gen_loss=2.12] Training Epoch 416/501: 100%|██████████| 98/98 [00:04<00:00, 24.48it/s, disc_loss=0.374, gen_loss=2.21] Training Epoch 417/501: 100%|██████████| 98/98 [00:03<00:00, 28.19it/s, disc_loss=0.45, gen_loss=0.835] Training Epoch 418/501: 100%|██████████| 98/98 [00:02<00:00, 41.69it/s, gen_loss=0.142] Training Epoch 419/501: 100%|██████████| 98/98 [00:02<00:00, 38.04it/s, disc_loss=0.39, gen_loss=1.78] Training Epoch 420/501: 100%|██████████| 98/98 [00:03<00:00, 27.37it/s, gen_loss=0.744] Training Epoch 421/501: 100%|██████████| 98/98 [00:03<00:00, 28.69it/s, disc_loss=0.383, gen_loss=2.16] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 58.72304153442383, KID: 0.04249031841754913
Training Epoch 422/501: 100%|██████████| 98/98 [00:03<00:00, 26.47it/s, disc_loss=0.368, gen_loss=2.13] Training Epoch 423/501: 100%|██████████| 98/98 [00:02<00:00, 39.08it/s, gen_loss=0.208] Training Epoch 424/501: 100%|██████████| 98/98 [00:02<00:00, 47.49it/s, disc_loss=0.407, gen_loss=2.11] Training Epoch 425/501: 100%|██████████| 98/98 [00:03<00:00, 26.21it/s, disc_loss=0.437, gen_loss=1.72] Training Epoch 426/501: 100%|██████████| 98/98 [00:03<00:00, 29.23it/s, disc_loss=0.395, gen_loss=2.27] Training Epoch 427/501: 100%|██████████| 98/98 [00:03<00:00, 27.42it/s, disc_loss=0.379, gen_loss=2.03] Training Epoch 428/501: 100%|██████████| 98/98 [00:02<00:00, 34.99it/s, disc_loss=0.509, gen_loss=0.761] Training Epoch 429/501: 100%|██████████| 98/98 [00:02<00:00, 48.12it/s, gen_loss=0.0723] Training Epoch 430/501: 100%|██████████| 98/98 [00:03<00:00, 30.43it/s, disc_loss=0.384, gen_loss=1.76] Training Epoch 431/501: 100%|██████████| 98/98 [00:03<00:00, 28.44it/s, disc_loss=0.375, gen_loss=2.23] Training Epoch 432/501: 100%|██████████| 98/98 [00:03<00:00, 26.29it/s, disc_loss=0.369, gen_loss=2.27] Training Epoch 433/501: 100%|██████████| 98/98 [00:02<00:00, 33.50it/s, gen_loss=2.08] Training Epoch 434/501: 100%|██████████| 98/98 [00:01<00:00, 50.45it/s, gen_loss=0.0847] Training Epoch 435/501: 100%|██████████| 98/98 [00:02<00:00, 35.17it/s, disc_loss=0.372, gen_loss=1.89] Training Epoch 436/501: 100%|██████████| 98/98 [00:03<00:00, 29.28it/s, gen_loss=0.476] Training Epoch 437/501: 100%|██████████| 98/98 [00:03<00:00, 28.47it/s, disc_loss=0.378, gen_loss=1.81] Training Epoch 438/501: 100%|██████████| 98/98 [00:03<00:00, 26.78it/s, gen_loss=1.35] Training Epoch 439/501: 100%|██████████| 98/98 [00:02<00:00, 38.32it/s, gen_loss=0.2] Training Epoch 440/501: 100%|██████████| 98/98 [00:02<00:00, 48.05it/s, disc_loss=0.385, gen_loss=1.87] Training Epoch 441/501: 100%|██████████| 98/98 [00:03<00:00, 25.61it/s, disc_loss=0.369, gen_loss=2.05] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 47.76156997680664, KID: 0.029191993176937103
Training Epoch 442/501: 100%|██████████| 98/98 [00:03<00:00, 27.70it/s, disc_loss=0.367, gen_loss=2.16] Training Epoch 443/501: 100%|██████████| 98/98 [00:03<00:00, 25.76it/s, disc_loss=0.369, gen_loss=2.35] Training Epoch 444/501: 100%|██████████| 98/98 [00:02<00:00, 34.54it/s, gen_loss=1.89] Training Epoch 445/501: 100%|██████████| 98/98 [00:01<00:00, 50.49it/s, gen_loss=0.0795] Training Epoch 446/501: 100%|██████████| 98/98 [00:03<00:00, 29.80it/s, disc_loss=0.407, gen_loss=1.96] Training Epoch 447/501: 100%|██████████| 98/98 [00:03<00:00, 29.78it/s, disc_loss=0.397, gen_loss=1.43] Training Epoch 448/501: 100%|██████████| 98/98 [00:04<00:00, 24.04it/s, disc_loss=0.379, gen_loss=2.1] Training Epoch 449/501: 100%|██████████| 98/98 [00:03<00:00, 31.31it/s, gen_loss=0.632] Training Epoch 450/501: 100%|██████████| 98/98 [00:02<00:00, 46.94it/s, gen_loss=0.0657] Training Epoch 451/501: 100%|██████████| 98/98 [00:02<00:00, 34.44it/s, disc_loss=0.443, gen_loss=1.47]
Training Epoch 452/501: 100%|██████████| 98/98 [00:03<00:00, 31.09it/s, disc_loss=0.464, gen_loss=1.29] Training Epoch 453/501: 100%|██████████| 98/98 [00:03<00:00, 26.21it/s, disc_loss=0.359, gen_loss=1.87] Training Epoch 454/501: 100%|██████████| 98/98 [00:03<00:00, 28.07it/s, gen_loss=0.343] Training Epoch 455/501: 100%|██████████| 98/98 [00:02<00:00, 40.07it/s, gen_loss=0.144] Training Epoch 456/501: 100%|██████████| 98/98 [00:01<00:00, 49.45it/s, disc_loss=0.412, gen_loss=1.94] Training Epoch 457/501: 100%|██████████| 98/98 [00:03<00:00, 25.82it/s, disc_loss=0.362, gen_loss=2.28] Training Epoch 458/501: 100%|██████████| 98/98 [00:03<00:00, 32.25it/s, disc_loss=0.395, gen_loss=2.22] Training Epoch 459/501: 100%|██████████| 98/98 [00:03<00:00, 26.19it/s, disc_loss=0.4, gen_loss=1.84] Training Epoch 460/501: 100%|██████████| 98/98 [00:02<00:00, 34.48it/s, gen_loss=0.692] Training Epoch 461/501: 100%|██████████| 98/98 [00:01<00:00, 51.45it/s, gen_loss=0.0738] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 60.664161682128906, KID: 0.041735634207725525
Training Epoch 462/501: 100%|██████████| 98/98 [00:03<00:00, 25.53it/s, disc_loss=0.378, gen_loss=2.33] Training Epoch 463/501: 100%|██████████| 98/98 [00:03<00:00, 26.30it/s, disc_loss=0.366, gen_loss=2.32] Training Epoch 464/501: 100%|██████████| 98/98 [00:03<00:00, 25.24it/s, disc_loss=0.366, gen_loss=2.07] Training Epoch 465/501: 100%|██████████| 98/98 [00:03<00:00, 32.38it/s, gen_loss=0.379] Training Epoch 466/501: 100%|██████████| 98/98 [00:02<00:00, 44.26it/s, gen_loss=0.0705] Training Epoch 467/501: 100%|██████████| 98/98 [00:03<00:00, 32.39it/s, disc_loss=0.376, gen_loss=1.99] Training Epoch 468/501: 100%|██████████| 98/98 [00:03<00:00, 30.32it/s, disc_loss=0.379, gen_loss=2.15] Training Epoch 469/501: 100%|██████████| 98/98 [00:03<00:00, 28.29it/s, disc_loss=0.366, gen_loss=2.49] Training Epoch 470/501: 100%|██████████| 98/98 [00:03<00:00, 28.38it/s, disc_loss=0.386, gen_loss=2.22] Training Epoch 471/501: 100%|██████████| 98/98 [00:02<00:00, 47.55it/s, gen_loss=0.122] Training Epoch 472/501: 100%|██████████| 98/98 [00:02<00:00, 36.63it/s, disc_loss=0.38, gen_loss=2.03] Training Epoch 473/501: 100%|██████████| 98/98 [00:03<00:00, 27.21it/s, gen_loss=1.38] Training Epoch 474/501: 100%|██████████| 98/98 [00:03<00:00, 31.53it/s, disc_loss=0.385, gen_loss=1.9] Training Epoch 475/501: 100%|██████████| 98/98 [00:03<00:00, 26.33it/s, disc_loss=0.42, gen_loss=2.33] Training Epoch 476/501: 100%|██████████| 98/98 [00:02<00:00, 38.04it/s, gen_loss=0.245] Training Epoch 477/501: 100%|██████████| 98/98 [00:01<00:00, 51.35it/s, disc_loss=1.71, gen_loss=3.73] Training Epoch 478/501: 100%|██████████| 98/98 [00:03<00:00, 28.54it/s, disc_loss=0.374, gen_loss=2.11] Training Epoch 479/501: 100%|██████████| 98/98 [00:03<00:00, 30.43it/s, disc_loss=0.38, gen_loss=1.99] Training Epoch 480/501: 100%|██████████| 98/98 [00:03<00:00, 26.96it/s, disc_loss=0.368, gen_loss=1.78] Training Epoch 481/501: 100%|██████████| 98/98 [00:03<00:00, 31.23it/s, disc_loss=0.425, gen_loss=2.32] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 60.9560546875, KID: 0.04388771951198578
Training Epoch 482/501: 100%|██████████| 98/98 [00:02<00:00, 44.47it/s, gen_loss=0.0655] Training Epoch 483/501: 100%|██████████| 98/98 [00:03<00:00, 31.20it/s, disc_loss=0.386, gen_loss=2.28] Training Epoch 484/501: 100%|██████████| 98/98 [00:03<00:00, 30.85it/s, disc_loss=0.375, gen_loss=2.18] Training Epoch 485/501: 100%|██████████| 98/98 [00:03<00:00, 26.69it/s, disc_loss=0.373, gen_loss=2.61] Training Epoch 486/501: 100%|██████████| 98/98 [00:03<00:00, 29.31it/s, gen_loss=2.15] Training Epoch 487/501: 100%|██████████| 98/98 [00:02<00:00, 47.17it/s, gen_loss=0.135] Training Epoch 488/501: 100%|██████████| 98/98 [00:02<00:00, 36.67it/s, disc_loss=0.363, gen_loss=2.19] Training Epoch 489/501: 100%|██████████| 98/98 [00:03<00:00, 28.27it/s, gen_loss=0.375] Training Epoch 490/501: 100%|██████████| 98/98 [00:03<00:00, 26.82it/s, disc_loss=0.4, gen_loss=1.88] Training Epoch 491/501: 100%|██████████| 98/98 [00:03<00:00, 28.03it/s, gen_loss=0.319] Training Epoch 492/501: 100%|██████████| 98/98 [00:02<00:00, 35.87it/s, gen_loss=0.315] Training Epoch 493/501: 100%|██████████| 98/98 [00:02<00:00, 47.32it/s, disc_loss=0.374, gen_loss=2.37] Training Epoch 494/501: 100%|██████████| 98/98 [00:03<00:00, 26.52it/s, disc_loss=0.38, gen_loss=2.09] Training Epoch 495/501: 100%|██████████| 98/98 [00:03<00:00, 30.88it/s, disc_loss=0.381, gen_loss=1.74] Training Epoch 496/501: 100%|██████████| 98/98 [00:03<00:00, 26.96it/s, disc_loss=0.405, gen_loss=1.82] Training Epoch 497/501: 100%|██████████| 98/98 [00:02<00:00, 34.90it/s, disc_loss=0.381, gen_loss=2.06] Training Epoch 498/501: 100%|██████████| 98/98 [00:01<00:00, 54.89it/s, gen_loss=0.084] Training Epoch 499/501: 100%|██████████| 98/98 [00:03<00:00, 31.57it/s, disc_loss=0.387, gen_loss=2.01] Training Epoch 500/501: 100%|██████████| 98/98 [00:03<00:00, 29.16it/s, disc_loss=0.367, gen_loss=2.25] Training Epoch 501/501: 100%|██████████| 98/98 [00:03<00:00, 25.49it/s, disc_loss=0.41, gen_loss=2.61] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 49.661376953125, KID: 0.03680025413632393
<Figure size 1200x600 with 0 Axes>
Observations:¶
- Images definitely look more realistic. The image for a ship and horse look good. Images also seem more diverse unlike before.
- Until around 200 spoehcs, both KID and FID decrease consistently and FID touches a new minimum. But soon after FID begins to dteriorate while KID continues to slowly decrease.
- While scores are stable, losses seem to oscillate alot but only within a certain range. This could indicate that the discriminator and generator are taking turns to fool each other which is a good thing.
Enhancing Conditioning¶
One problem, we constantly observed during training was issues with conditioning the models with the right labels. The model didn't seem to take into account the labels and generated something unrelated. To combat this we will try a variety of approaches and compare them.
Auxillary Classifer¶
The ACGAN is an extension of the traditional DCGAN (Deep Convolutional Generative Adversarial Network) architecture, introducing an auxiliary classifier to enhance its capabilities.

Dual Classification Tasks:¶
- ACGAN incorporates two classification tasks within the discriminator.
- The first task involves binary classification, distinguishing between real and generated images.
- The second task focuses on multi-class classification, determining the class labels of the generated images.
Auxiliary Classifier:¶
- A dedicated auxiliary classifier is integrated into the discriminator to perform the additional classification task.
- This auxiliary classifier enhances the discriminator's ability to not only discern the authenticity of images but also predict their class labels leading to better diversity of images
Combined Objective:¶
- The generator and discriminator jointly optimize a combined objective that includes both adversarial loss (real vs. fake) and auxiliary classification loss.
- This joint optimization encourages the generator to produce not only realistic but also class-specific images.
class ACDiscriminator(nn.Module):
def __init__(self):
super(ACDiscriminator, self).__init__()
#self.embed = nn.Embedding(NUM_CLASS,128)
self.conv_layers = nn.Sequential(
nn.utils.spectral_norm(nn.Conv2d(CHANNELS, 32, kernel_size=4, stride=2, padding=1)),
nn.BatchNorm2d(32),
nn.LeakyReLU(0.1, inplace=True),
nn.utils.spectral_norm(nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1)),
nn.BatchNorm2d(64),
nn.LeakyReLU(0.1, inplace=True),
nn.utils.spectral_norm(nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1)),
nn.BatchNorm2d(128),
nn.LeakyReLU(0.1, inplace=True),
nn.utils.spectral_norm(nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1)),
nn.BatchNorm2d(256),
nn.LeakyReLU(0.1, inplace=True),
nn.AvgPool2d(2, stride=2)
)
self.output_layers = nn.Sequential(
nn.Linear(256, 512),
nn.LeakyReLU(0.1, inplace=True),
nn.Linear(512, 1),
nn.Sigmoid()
)
self.classifier = nn.Sequential(
nn.Linear(256, 512),
nn.LeakyReLU(0.1, inplace=True),
nn.Linear(512, NUM_CLASS),
nn.Softmax()
)
def forward(self, x,label=None):
#labels = self.embed(torch.argmax(labels,axis=1))
output = self.conv_layers(x).squeeze()
#x = torch.cat((output,labels), dim=1)
f = self.output_layers(output)
c = self.classifier(output)
return f,c
class ACGAN(BalancedGAN):
def __init__(self, generator, discriminator, train_loader):
super().__init__(generator, discriminator, train_loader)
def disc_step(self,img,label):
self.d_opt.zero_grad()
img = img.to(device)
label = label.to(device)
img.requires_grad = True
# Discriminator Loss
noise = torch.normal(0, 1, (img.size()[0], self.generator.latent_dim), device=device)
fake_imgs = self.generator(noise, label)
fake_pred, label_pred_fake = self.discriminator(fake_imgs)
real_pred, label_pred_real = self.discriminator(img)
fake_label = smooth_labels(torch.zeros((img.size()[0], 1), device=device))
real_label = smooth_labels(torch.ones((img.size()[0], 1), device=device))
r1_loss = self.r1_loss(real_pred, img)
d_loss = (self.loss(fake_pred, fake_label) + self.loss(real_pred, real_label)) / 2
d_loss = d_loss + r1_loss
d_loss.backward()
# Classifer Loss
noise = torch.normal(0, 1, (img.size()[0], self.generator.latent_dim), device=device)
fake_imgs = self.generator(noise, label)
fake_pred, label_pred_fake = self.discriminator(fake_imgs)
real_pred, label_pred_real = self.discriminator(img)
aux_loss_fake = nn.CrossEntropyLoss()(label_pred_fake, label.float())
aux_loss_real = nn.CrossEntropyLoss()(label_pred_real, label.float())
aux_loss = (aux_loss_fake + aux_loss_real) / 2
aux_loss.backward()
self.d_opt.step()
return d_loss.cpu().item()
def gen_step(self,img,label):
self.g_opt.zero_grad()
img = img.to(device)
label = label.to(device)
noise = torch.normal(0, 1, (img.size()[0], self.generator.latent_dim), device=device)
fake_imgs = self.generator(noise, label)
fake_pred,_ = self.discriminator(fake_imgs, label)
real_label = torch.ones((img.size()[0], 1), device=device)
g_loss = self.loss(fake_pred, real_label)
g_loss.backward()
self.g_opt.step()
return g_loss.cpu().item()
def get_precision(self, img, label):
with torch.no_grad():
img = img.to(device)
label = label.to(device)
fake_label = smooth_labels(torch.zeros((img.size()[0], 1), device=device))
real_label = smooth_labels(torch.ones((img.size()[0], 1), device=device))
noise = torch.normal(0, 1, (img.size()[0], self.generator.latent_dim), device=device)
fake_imgs = self.generator(noise, label)
fake_pred,_ = self.discriminator(fake_imgs, label)
real_pred,_ = self.discriminator(img, label)
tp_r = ((fake_pred <= 0.5) & (fake_label <= 0.5)).sum().item()
tp_f = ((real_pred <= 0.5) & (real_label <= 0.5)).sum().item()
tp = tp_r + tp_f
fp_r = ((fake_pred >= 0.5) & (fake_label <= 0.5)).sum().item()
fp_f = ((real_pred >= 0.5) & (real_label <= 0.5)).sum().item()
fp = fp_r + fp_f
precision = tp / max((tp + fp), 1e-9) # Avoid division by zero
return precision
gen_imp = ResizeGenerator(128,1024).to(device)
ac_spectral = ACDiscriminator().to(device)
acgan = ACGAN(gen_imp,ac_spectral,train_loader)
acgan.fit(501,train_loader)
plot_losses(501,[ (balancedgan,"Balanced"),(sngan,"Spectral"),
(acgan,"ACGAN")])
acgan.save("acgan-500e")
torch.cuda.empty_cache()
Training ACGAN for 501 Epochs
Training Epoch 1/501: 100%|██████████| 98/98 [00:05<00:00, 18.53it/s, disc_loss=0.575, gen_loss=1.42]
100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 121.42070770263672, KID: 0.12733101844787598
Training Epoch 2/501: 100%|██████████| 98/98 [00:05<00:00, 19.38it/s, disc_loss=0.595, gen_loss=1.43] Training Epoch 3/501: 100%|██████████| 98/98 [00:04<00:00, 20.02it/s, disc_loss=0.584, gen_loss=1.24] Training Epoch 4/501: 100%|██████████| 98/98 [00:04<00:00, 19.99it/s, disc_loss=0.527, gen_loss=1.35] Training Epoch 5/501: 100%|██████████| 98/98 [00:05<00:00, 18.97it/s, disc_loss=0.529, gen_loss=1.51] Training Epoch 6/501: 100%|██████████| 98/98 [00:05<00:00, 19.24it/s, disc_loss=0.487, gen_loss=2.35] Training Epoch 7/501: 100%|██████████| 98/98 [00:01<00:00, 51.15it/s, gen_loss=0.0259] Training Epoch 8/501: 100%|██████████| 98/98 [00:03<00:00, 25.60it/s, disc_loss=0.585, gen_loss=1.42] Training Epoch 9/501: 100%|██████████| 98/98 [00:03<00:00, 26.14it/s, gen_loss=1.23] Training Epoch 10/501: 100%|██████████| 98/98 [00:04<00:00, 23.32it/s, disc_loss=0.723, gen_loss=1.35] Training Epoch 11/501: 100%|██████████| 98/98 [00:04<00:00, 20.26it/s, disc_loss=0.522, gen_loss=1.41] Training Epoch 12/501: 100%|██████████| 98/98 [00:02<00:00, 35.03it/s, gen_loss=0.0399] Training Epoch 13/501: 100%|██████████| 98/98 [00:03<00:00, 28.16it/s, gen_loss=1.12] Training Epoch 14/501: 100%|██████████| 98/98 [00:04<00:00, 23.76it/s, disc_loss=0.456, gen_loss=1.42] Training Epoch 15/501: 100%|██████████| 98/98 [00:04<00:00, 21.85it/s, disc_loss=0.467, gen_loss=1.53] Training Epoch 16/501: 100%|██████████| 98/98 [00:04<00:00, 24.23it/s, disc_loss=0.449, gen_loss=1.78] Training Epoch 17/501: 100%|██████████| 98/98 [00:03<00:00, 27.92it/s, gen_loss=0.104] Training Epoch 18/501: 100%|██████████| 98/98 [00:02<00:00, 35.46it/s, gen_loss=1.08] Training Epoch 19/501: 100%|██████████| 98/98 [00:03<00:00, 27.58it/s, disc_loss=0.759, gen_loss=1.87] Training Epoch 20/501: 100%|██████████| 98/98 [00:04<00:00, 21.79it/s, disc_loss=0.447, gen_loss=1.98] Training Epoch 21/501: 100%|██████████| 98/98 [00:04<00:00, 24.29it/s, gen_loss=1.28] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 74.92557525634766, KID: 0.07161511480808258
Training Epoch 22/501: 100%|██████████| 98/98 [00:04<00:00, 24.19it/s, gen_loss=0.113] Training Epoch 23/501: 100%|██████████| 98/98 [00:02<00:00, 47.89it/s, disc_loss=0.47, gen_loss=1.44] Training Epoch 24/501: 100%|██████████| 98/98 [00:04<00:00, 23.00it/s, gen_loss=1.14] Training Epoch 25/501: 100%|██████████| 98/98 [00:04<00:00, 22.95it/s, disc_loss=0.739, gen_loss=3.35] Training Epoch 26/501: 100%|██████████| 98/98 [00:04<00:00, 21.89it/s, disc_loss=0.442, gen_loss=1.58] Training Epoch 27/501: 100%|██████████| 98/98 [00:04<00:00, 21.87it/s, disc_loss=0.442, gen_loss=1.72] Training Epoch 28/501: 100%|██████████| 98/98 [00:02<00:00, 46.94it/s, gen_loss=0.039] Training Epoch 29/501: 100%|██████████| 98/98 [00:03<00:00, 27.13it/s, disc_loss=0.443, gen_loss=1.42] Training Epoch 30/501: 100%|██████████| 98/98 [00:04<00:00, 23.18it/s, disc_loss=0.504, gen_loss=1.99] Training Epoch 31/501: 100%|██████████| 98/98 [00:03<00:00, 25.71it/s, gen_loss=0.915] Training Epoch 32/501: 100%|██████████| 98/98 [00:03<00:00, 25.40it/s, gen_loss=1.83] Training Epoch 33/501: 100%|██████████| 98/98 [00:02<00:00, 35.77it/s, gen_loss=0.0527] Training Epoch 34/501: 100%|██████████| 98/98 [00:03<00:00, 31.74it/s, disc_loss=0.477, gen_loss=1.75] Training Epoch 35/501: 100%|██████████| 98/98 [00:04<00:00, 23.95it/s, disc_loss=0.494, gen_loss=1.97] Training Epoch 36/501: 100%|██████████| 98/98 [00:04<00:00, 24.04it/s, disc_loss=0.472, gen_loss=1.6] Training Epoch 37/501: 100%|██████████| 98/98 [00:03<00:00, 26.37it/s, disc_loss=0.409, gen_loss=2.22] Training Epoch 38/501: 100%|██████████| 98/98 [00:03<00:00, 26.08it/s, gen_loss=0.079] Training Epoch 39/501: 100%|██████████| 98/98 [00:02<00:00, 35.71it/s, disc_loss=0.416, gen_loss=2.05] Training Epoch 40/501: 100%|██████████| 98/98 [00:04<00:00, 24.26it/s, gen_loss=0.705] Training Epoch 41/501: 100%|██████████| 98/98 [00:04<00:00, 23.35it/s, gen_loss=1.09] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 80.34097290039062, KID: 0.07161049544811249
Training Epoch 42/501: 100%|██████████| 98/98 [00:03<00:00, 24.86it/s, disc_loss=0.579, gen_loss=2.95] Training Epoch 43/501: 100%|██████████| 98/98 [00:04<00:00, 24.20it/s, gen_loss=0.222] Training Epoch 44/501: 100%|██████████| 98/98 [00:02<00:00, 48.51it/s, gen_loss=2.07] Training Epoch 45/501: 100%|██████████| 98/98 [00:03<00:00, 25.01it/s, gen_loss=1.35] Training Epoch 46/501: 100%|██████████| 98/98 [00:04<00:00, 23.78it/s, gen_loss=1.38] Training Epoch 47/501: 100%|██████████| 98/98 [00:03<00:00, 26.01it/s, disc_loss=0.813, gen_loss=2.43] Training Epoch 48/501: 100%|██████████| 98/98 [00:04<00:00, 21.96it/s, disc_loss=0.416, gen_loss=1.86] Training Epoch 49/501: 100%|██████████| 98/98 [00:02<00:00, 46.88it/s, gen_loss=0.0199] Training Epoch 50/501: 100%|██████████| 98/98 [00:04<00:00, 23.75it/s, disc_loss=0.424, gen_loss=1.84] Training Epoch 51/501: 100%|██████████| 98/98 [00:03<00:00, 25.14it/s, disc_loss=0.528, gen_loss=1.07]
Training Epoch 52/501: 100%|██████████| 98/98 [00:03<00:00, 25.11it/s, gen_loss=0.749] Training Epoch 53/501: 100%|██████████| 98/98 [00:04<00:00, 22.34it/s, disc_loss=0.406, gen_loss=1.96] Training Epoch 54/501: 100%|██████████| 98/98 [00:03<00:00, 32.32it/s, gen_loss=0.0748] Training Epoch 55/501: 100%|██████████| 98/98 [00:03<00:00, 28.55it/s, gen_loss=1.34] Training Epoch 56/501: 100%|██████████| 98/98 [00:04<00:00, 22.38it/s, disc_loss=0.517, gen_loss=1.39] Training Epoch 57/501: 100%|██████████| 98/98 [00:03<00:00, 25.46it/s, disc_loss=0.436, gen_loss=2.35] Training Epoch 58/501: 100%|██████████| 98/98 [00:03<00:00, 24.66it/s, disc_loss=0.487, gen_loss=1.99] Training Epoch 59/501: 100%|██████████| 98/98 [00:03<00:00, 29.53it/s, gen_loss=0.1] Training Epoch 60/501: 100%|██████████| 98/98 [00:02<00:00, 37.11it/s, disc_loss=0.39, gen_loss=2.19] Training Epoch 61/501: 100%|██████████| 98/98 [00:03<00:00, 24.95it/s, disc_loss=0.428, gen_loss=2.07] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 71.2068099975586, KID: 0.06732255965471268
Training Epoch 62/501: 100%|██████████| 98/98 [00:04<00:00, 22.82it/s, gen_loss=1.39] Training Epoch 63/501: 100%|██████████| 98/98 [00:04<00:00, 21.50it/s, disc_loss=0.412, gen_loss=1.63] Training Epoch 64/501: 100%|██████████| 98/98 [00:04<00:00, 22.54it/s, gen_loss=0.232] Training Epoch 65/501: 100%|██████████| 98/98 [00:01<00:00, 49.49it/s, gen_loss=2.38] Training Epoch 66/501: 100%|██████████| 98/98 [00:03<00:00, 26.18it/s, disc_loss=0.424, gen_loss=1.74] Training Epoch 67/501: 100%|██████████| 98/98 [00:03<00:00, 24.74it/s, disc_loss=0.415, gen_loss=1.98] Training Epoch 68/501: 100%|██████████| 98/98 [00:03<00:00, 26.21it/s, gen_loss=2.22] Training Epoch 69/501: 100%|██████████| 98/98 [00:04<00:00, 23.06it/s, disc_loss=0.427, gen_loss=1.54] Training Epoch 70/501: 100%|██████████| 98/98 [00:02<00:00, 39.96it/s, gen_loss=0.0361] Training Epoch 71/501: 100%|██████████| 98/98 [00:03<00:00, 27.65it/s, gen_loss=1.56] Training Epoch 72/501: 100%|██████████| 98/98 [00:03<00:00, 26.48it/s, gen_loss=0.883] Training Epoch 73/501: 100%|██████████| 98/98 [00:03<00:00, 28.28it/s, gen_loss=1.27] Training Epoch 74/501: 100%|██████████| 98/98 [00:03<00:00, 29.08it/s, disc_loss=0.431, gen_loss=1.75] Training Epoch 75/501: 100%|██████████| 98/98 [00:03<00:00, 32.11it/s, gen_loss=0.047] Training Epoch 76/501: 100%|██████████| 98/98 [00:02<00:00, 32.81it/s, disc_loss=0.405, gen_loss=2.16] Training Epoch 77/501: 100%|██████████| 98/98 [00:03<00:00, 27.32it/s, disc_loss=0.58, gen_loss=2.08] Training Epoch 78/501: 100%|██████████| 98/98 [00:04<00:00, 22.97it/s, disc_loss=0.385, gen_loss=2.19] Training Epoch 79/501: 100%|██████████| 98/98 [00:03<00:00, 24.84it/s, disc_loss=0.407, gen_loss=1.95] Training Epoch 80/501: 100%|██████████| 98/98 [00:03<00:00, 26.82it/s, gen_loss=0.099] Training Epoch 81/501: 100%|██████████| 98/98 [00:02<00:00, 43.87it/s, disc_loss=0.436, gen_loss=1.57] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 65.24176788330078, KID: 0.06095496937632561
Training Epoch 82/501: 100%|██████████| 98/98 [00:03<00:00, 25.78it/s, disc_loss=0.418, gen_loss=1.79] Training Epoch 83/501: 100%|██████████| 98/98 [00:03<00:00, 25.61it/s, disc_loss=0.515, gen_loss=2.65] Training Epoch 84/501: 100%|██████████| 98/98 [00:03<00:00, 25.17it/s, disc_loss=0.391, gen_loss=1.55] Training Epoch 85/501: 100%|██████████| 98/98 [00:04<00:00, 21.92it/s, gen_loss=0.668] Training Epoch 86/501: 100%|██████████| 98/98 [00:02<00:00, 44.58it/s, gen_loss=0.0588] Training Epoch 87/501: 100%|██████████| 98/98 [00:03<00:00, 25.84it/s, disc_loss=0.385, gen_loss=2.08] Training Epoch 88/501: 100%|██████████| 98/98 [00:04<00:00, 23.61it/s, disc_loss=0.582, gen_loss=3] Training Epoch 89/501: 100%|██████████| 98/98 [00:03<00:00, 26.12it/s, disc_loss=0.515, gen_loss=2.42] Training Epoch 90/501: 100%|██████████| 98/98 [00:04<00:00, 24.06it/s, disc_loss=0.389, gen_loss=1.8] Training Epoch 91/501: 100%|██████████| 98/98 [00:02<00:00, 35.79it/s, gen_loss=0.0276] Training Epoch 92/501: 100%|██████████| 98/98 [00:03<00:00, 27.51it/s, disc_loss=0.522, gen_loss=2.33] Training Epoch 93/501: 100%|██████████| 98/98 [00:04<00:00, 24.26it/s, disc_loss=0.362, gen_loss=2.21] Training Epoch 94/501: 100%|██████████| 98/98 [00:04<00:00, 24.01it/s, disc_loss=0.389, gen_loss=1.88] Training Epoch 95/501: 100%|██████████| 98/98 [00:04<00:00, 23.47it/s, gen_loss=0.306] Training Epoch 96/501: 100%|██████████| 98/98 [00:03<00:00, 29.80it/s, gen_loss=0.053] Training Epoch 97/501: 100%|██████████| 98/98 [00:02<00:00, 40.05it/s, gen_loss=1.2] Training Epoch 98/501: 100%|██████████| 98/98 [00:04<00:00, 22.40it/s, gen_loss=1.87] Training Epoch 99/501: 100%|██████████| 98/98 [00:03<00:00, 25.03it/s, disc_loss=0.423, gen_loss=2.38] Training Epoch 100/501: 100%|██████████| 98/98 [00:04<00:00, 22.64it/s, disc_loss=0.401, gen_loss=2.25] Training Epoch 101/501: 100%|██████████| 98/98 [00:03<00:00, 25.18it/s, gen_loss=0.175] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 51.00214767456055, KID: 0.04183284565806389
Training Epoch 102/501: 100%|██████████| 98/98 [00:02<00:00, 41.47it/s, disc_loss=0.385, gen_loss=2.21] Training Epoch 103/501: 100%|██████████| 98/98 [00:03<00:00, 30.03it/s, gen_loss=1.39] Training Epoch 104/501: 100%|██████████| 98/98 [00:03<00:00, 29.03it/s, gen_loss=0.526] Training Epoch 105/501: 100%|██████████| 98/98 [00:03<00:00, 30.85it/s, disc_loss=0.961, gen_loss=4.07] Training Epoch 106/501: 100%|██████████| 98/98 [00:04<00:00, 24.00it/s, disc_loss=0.372, gen_loss=2.48] Training Epoch 107/501: 100%|██████████| 98/98 [00:01<00:00, 49.17it/s, gen_loss=0.0607] Training Epoch 108/501: 100%|██████████| 98/98 [00:03<00:00, 24.98it/s, gen_loss=2.98] Training Epoch 109/501: 100%|██████████| 98/98 [00:04<00:00, 23.33it/s, disc_loss=0.417, gen_loss=2.02] Training Epoch 110/501: 100%|██████████| 98/98 [00:03<00:00, 25.63it/s, disc_loss=0.753, gen_loss=2.2] Training Epoch 111/501: 100%|██████████| 98/98 [00:04<00:00, 22.46it/s, disc_loss=0.393, gen_loss=1.85] Training Epoch 112/501: 100%|██████████| 98/98 [00:02<00:00, 33.24it/s, gen_loss=0.0629] Training Epoch 113/501: 100%|██████████| 98/98 [00:03<00:00, 26.64it/s, disc_loss=0.415, gen_loss=1.98] Training Epoch 114/501: 100%|██████████| 98/98 [00:03<00:00, 25.33it/s, disc_loss=0.546, gen_loss=2.31] Training Epoch 115/501: 100%|██████████| 98/98 [00:04<00:00, 22.73it/s, disc_loss=0.556, gen_loss=2.21] Training Epoch 116/501: 100%|██████████| 98/98 [00:04<00:00, 24.50it/s, gen_loss=1.64] Training Epoch 117/501: 100%|██████████| 98/98 [00:02<00:00, 32.89it/s, gen_loss=0.24] Training Epoch 118/501: 100%|██████████| 98/98 [00:02<00:00, 34.60it/s, disc_loss=0.354, gen_loss=2.28] Training Epoch 119/501: 100%|██████████| 98/98 [00:03<00:00, 32.26it/s, disc_loss=0.352, gen_loss=2.25] Training Epoch 120/501: 100%|██████████| 98/98 [00:04<00:00, 23.69it/s, disc_loss=0.376, gen_loss=1.88] Training Epoch 121/501: 100%|██████████| 98/98 [00:04<00:00, 22.52it/s, disc_loss=0.398, gen_loss=2.4] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 68.24649047851562, KID: 0.062145814299583435
Training Epoch 122/501: 100%|██████████| 98/98 [00:04<00:00, 22.71it/s, gen_loss=0.486] Training Epoch 123/501: 100%|██████████| 98/98 [00:02<00:00, 48.44it/s, gen_loss=0.0345] Training Epoch 124/501: 100%|██████████| 98/98 [00:04<00:00, 23.80it/s, gen_loss=0.639] Training Epoch 125/501: 100%|██████████| 98/98 [00:03<00:00, 25.08it/s, disc_loss=0.38, gen_loss=2.02] Training Epoch 126/501: 100%|██████████| 98/98 [00:04<00:00, 20.44it/s, disc_loss=0.378, gen_loss=2.14] Training Epoch 127/501: 100%|██████████| 98/98 [00:04<00:00, 23.00it/s, disc_loss=0.419, gen_loss=1.53] Training Epoch 128/501: 100%|██████████| 98/98 [00:02<00:00, 41.30it/s, gen_loss=0.0654] Training Epoch 129/501: 100%|██████████| 98/98 [00:04<00:00, 23.61it/s, disc_loss=0.4, gen_loss=2.2] Training Epoch 130/501: 100%|██████████| 98/98 [00:03<00:00, 24.56it/s, gen_loss=1.56] Training Epoch 131/501: 100%|██████████| 98/98 [00:04<00:00, 23.00it/s, gen_loss=1.19] Training Epoch 132/501: 100%|██████████| 98/98 [00:03<00:00, 26.16it/s, gen_loss=0.356] Training Epoch 133/501: 100%|██████████| 98/98 [00:03<00:00, 29.78it/s, gen_loss=0.0713] Training Epoch 134/501: 100%|██████████| 98/98 [00:03<00:00, 29.29it/s, disc_loss=0.387, gen_loss=2.41] Training Epoch 135/501: 100%|██████████| 98/98 [00:04<00:00, 21.29it/s, gen_loss=1.97] Training Epoch 136/501: 100%|██████████| 98/98 [00:03<00:00, 25.80it/s, disc_loss=0.388, gen_loss=1.75] Training Epoch 137/501: 100%|██████████| 98/98 [00:04<00:00, 23.12it/s, disc_loss=0.393, gen_loss=2.29] Training Epoch 138/501: 100%|██████████| 98/98 [00:03<00:00, 26.85it/s, gen_loss=0.2] Training Epoch 139/501: 100%|██████████| 98/98 [00:02<00:00, 40.21it/s, disc_loss=0.432, gen_loss=1.81] Training Epoch 140/501: 100%|██████████| 98/98 [00:04<00:00, 21.36it/s, disc_loss=0.394, gen_loss=2.42] Training Epoch 141/501: 100%|██████████| 98/98 [00:04<00:00, 22.78it/s, disc_loss=0.879, gen_loss=2.52] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 55.46623229980469, KID: 0.053866248577833176
Training Epoch 142/501: 100%|██████████| 98/98 [00:04<00:00, 21.67it/s, gen_loss=1.16] Training Epoch 143/501: 100%|██████████| 98/98 [00:04<00:00, 21.82it/s, disc_loss=0.445, gen_loss=2.29] Training Epoch 144/501: 100%|██████████| 98/98 [00:02<00:00, 47.32it/s, gen_loss=0.0492] Training Epoch 145/501: 100%|██████████| 98/98 [00:04<00:00, 21.34it/s, gen_loss=1.57] Training Epoch 146/501: 100%|██████████| 98/98 [00:03<00:00, 26.62it/s, disc_loss=0.371, gen_loss=2.27] Training Epoch 147/501: 100%|██████████| 98/98 [00:04<00:00, 22.97it/s, disc_loss=0.41, gen_loss=2.7] Training Epoch 148/501: 100%|██████████| 98/98 [00:04<00:00, 22.11it/s, disc_loss=0.402, gen_loss=1.79] Training Epoch 149/501: 100%|██████████| 98/98 [00:02<00:00, 33.07it/s, gen_loss=0.0939] Training Epoch 150/501: 100%|██████████| 98/98 [00:03<00:00, 28.10it/s, disc_loss=0.396, gen_loss=1.97] Training Epoch 151/501: 100%|██████████| 98/98 [00:04<00:00, 22.85it/s, disc_loss=0.698, gen_loss=3.42]
Training Epoch 152/501: 100%|██████████| 98/98 [00:04<00:00, 22.55it/s, disc_loss=0.385, gen_loss=1.6] Training Epoch 153/501: 100%|██████████| 98/98 [00:04<00:00, 22.40it/s, gen_loss=0.571] Training Epoch 154/501: 100%|██████████| 98/98 [00:03<00:00, 28.92it/s, gen_loss=0.104] Training Epoch 155/501: 100%|██████████| 98/98 [00:02<00:00, 37.14it/s, disc_loss=0.383, gen_loss=1.81] Training Epoch 156/501: 100%|██████████| 98/98 [00:04<00:00, 20.81it/s, disc_loss=0.38, gen_loss=1.93] Training Epoch 157/501: 100%|██████████| 98/98 [00:03<00:00, 28.04it/s, disc_loss=0.353, gen_loss=2.26] Training Epoch 158/501: 100%|██████████| 98/98 [00:03<00:00, 26.31it/s, disc_loss=0.386, gen_loss=2.35] Training Epoch 159/501: 100%|██████████| 98/98 [00:03<00:00, 24.73it/s, gen_loss=0.353] Training Epoch 160/501: 100%|██████████| 98/98 [00:01<00:00, 53.78it/s, gen_loss=0.0464] Training Epoch 161/501: 100%|██████████| 98/98 [00:04<00:00, 23.21it/s, disc_loss=0.46, gen_loss=0.964] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 48.67381286621094, KID: 0.04115245118737221
Training Epoch 162/501: 100%|██████████| 98/98 [00:04<00:00, 20.79it/s, gen_loss=0.639] Training Epoch 163/501: 100%|██████████| 98/98 [00:04<00:00, 23.61it/s, disc_loss=0.389, gen_loss=1.89] Training Epoch 164/501: 100%|██████████| 98/98 [00:04<00:00, 22.24it/s, disc_loss=0.394, gen_loss=2.04] Training Epoch 165/501: 100%|██████████| 98/98 [00:02<00:00, 42.61it/s, gen_loss=0.0438] Training Epoch 166/501: 100%|██████████| 98/98 [00:04<00:00, 23.91it/s, disc_loss=0.414, gen_loss=1.95] Training Epoch 167/501: 100%|██████████| 98/98 [00:04<00:00, 23.16it/s, disc_loss=0.419, gen_loss=2.27] Training Epoch 168/501: 100%|██████████| 98/98 [00:03<00:00, 25.81it/s, gen_loss=1.07] Training Epoch 169/501: 100%|██████████| 98/98 [00:03<00:00, 26.17it/s, disc_loss=0.562, gen_loss=1] Training Epoch 170/501: 100%|██████████| 98/98 [00:03<00:00, 32.57it/s, gen_loss=0.0736] Training Epoch 171/501: 100%|██████████| 98/98 [00:03<00:00, 31.48it/s, disc_loss=0.389, gen_loss=2.11] Training Epoch 172/501: 100%|██████████| 98/98 [00:05<00:00, 19.37it/s, gen_loss=1.64] Training Epoch 173/501: 100%|██████████| 98/98 [00:04<00:00, 23.43it/s, disc_loss=0.375, gen_loss=2.48] Training Epoch 174/501: 100%|██████████| 98/98 [00:04<00:00, 20.24it/s, disc_loss=0.38, gen_loss=2.11] Training Epoch 175/501: 100%|██████████| 98/98 [00:03<00:00, 26.90it/s, gen_loss=0.167] Training Epoch 176/501: 100%|██████████| 98/98 [00:02<00:00, 41.05it/s, disc_loss=0.391, gen_loss=2.07] Training Epoch 177/501: 100%|██████████| 98/98 [00:04<00:00, 20.95it/s, disc_loss=0.36, gen_loss=2.28] Training Epoch 178/501: 100%|██████████| 98/98 [00:03<00:00, 28.28it/s, gen_loss=0.874] Training Epoch 179/501: 100%|██████████| 98/98 [00:03<00:00, 26.97it/s, disc_loss=0.347, gen_loss=2.36] Training Epoch 180/501: 100%|██████████| 98/98 [00:03<00:00, 30.85it/s, gen_loss=2.03] Training Epoch 181/501: 100%|██████████| 98/98 [00:01<00:00, 49.22it/s, gen_loss=0.125] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 166.98594665527344, KID: 0.18025989830493927
Training Epoch 182/501: 100%|██████████| 98/98 [00:04<00:00, 23.70it/s, gen_loss=0.891] Training Epoch 183/501: 100%|██████████| 98/98 [00:03<00:00, 25.58it/s, disc_loss=0.387, gen_loss=2.21] Training Epoch 184/501: 100%|██████████| 98/98 [00:04<00:00, 24.21it/s, gen_loss=1.74] Training Epoch 185/501: 100%|██████████| 98/98 [00:03<00:00, 26.47it/s, gen_loss=0.382] Training Epoch 186/501: 100%|██████████| 98/98 [00:03<00:00, 31.03it/s, gen_loss=0.0729] Training Epoch 187/501: 100%|██████████| 98/98 [00:03<00:00, 27.28it/s, disc_loss=0.367, gen_loss=1.96] Training Epoch 188/501: 100%|██████████| 98/98 [00:04<00:00, 21.41it/s, disc_loss=0.417, gen_loss=1.25] Training Epoch 189/501: 100%|██████████| 98/98 [00:04<00:00, 22.23it/s, disc_loss=0.399, gen_loss=1.86] Training Epoch 190/501: 100%|██████████| 98/98 [00:04<00:00, 21.60it/s, gen_loss=0.998] Training Epoch 191/501: 100%|██████████| 98/98 [00:03<00:00, 30.63it/s, gen_loss=0.239] Training Epoch 192/501: 100%|██████████| 98/98 [00:02<00:00, 40.97it/s, disc_loss=0.369, gen_loss=2.07] Training Epoch 193/501: 100%|██████████| 98/98 [00:04<00:00, 21.49it/s, disc_loss=0.347, gen_loss=2.34] Training Epoch 194/501: 100%|██████████| 98/98 [00:02<00:00, 33.57it/s, gen_loss=0.589] Training Epoch 195/501: 100%|██████████| 98/98 [00:03<00:00, 25.24it/s, gen_loss=1.29] Training Epoch 196/501: 100%|██████████| 98/98 [00:03<00:00, 28.14it/s, gen_loss=0.448] Training Epoch 197/501: 100%|██████████| 98/98 [00:01<00:00, 53.99it/s, gen_loss=0.0384] Training Epoch 198/501: 100%|██████████| 98/98 [00:04<00:00, 20.86it/s, disc_loss=0.376, gen_loss=2.23] Training Epoch 199/501: 100%|██████████| 98/98 [00:04<00:00, 19.95it/s, disc_loss=0.398, gen_loss=2.28] Training Epoch 200/501: 100%|██████████| 98/98 [00:03<00:00, 26.44it/s, disc_loss=0.49, gen_loss=1.16] Training Epoch 201/501: 100%|██████████| 98/98 [00:04<00:00, 23.00it/s, disc_loss=0.365, gen_loss=2.18] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 59.939510345458984, KID: 0.047415997833013535
Training Epoch 202/501: 100%|██████████| 98/98 [00:02<00:00, 39.81it/s, gen_loss=0.0889] Training Epoch 203/501: 100%|██████████| 98/98 [00:03<00:00, 24.71it/s, disc_loss=0.361, gen_loss=2.4] Training Epoch 204/501: 100%|██████████| 98/98 [00:03<00:00, 25.02it/s, disc_loss=0.795, gen_loss=2.27] Training Epoch 205/501: 100%|██████████| 98/98 [00:04<00:00, 23.28it/s, disc_loss=0.358, gen_loss=1.94] Training Epoch 206/501: 100%|██████████| 98/98 [00:04<00:00, 23.67it/s, gen_loss=0.596] Training Epoch 207/501: 100%|██████████| 98/98 [00:03<00:00, 31.83it/s, gen_loss=0.115] Training Epoch 208/501: 100%|██████████| 98/98 [00:03<00:00, 31.61it/s, disc_loss=0.377, gen_loss=1.97] Training Epoch 209/501: 100%|██████████| 98/98 [00:04<00:00, 21.94it/s, disc_loss=0.378, gen_loss=2.15] Training Epoch 210/501: 100%|██████████| 98/98 [00:03<00:00, 26.48it/s, disc_loss=0.391, gen_loss=1.58] Training Epoch 211/501: 100%|██████████| 98/98 [00:04<00:00, 21.62it/s, disc_loss=0.363, gen_loss=1.91] Training Epoch 212/501: 100%|██████████| 98/98 [00:02<00:00, 33.06it/s, gen_loss=0.357] Training Epoch 213/501: 100%|██████████| 98/98 [00:02<00:00, 48.34it/s, disc_loss=0.608, gen_loss=1.05] Training Epoch 214/501: 100%|██████████| 98/98 [00:04<00:00, 19.89it/s, disc_loss=0.365, gen_loss=1.89] Training Epoch 215/501: 100%|██████████| 98/98 [00:04<00:00, 22.17it/s, disc_loss=0.443, gen_loss=1.15] Training Epoch 216/501: 100%|██████████| 98/98 [00:04<00:00, 23.52it/s, disc_loss=0.378, gen_loss=2.13] Training Epoch 217/501: 100%|██████████| 98/98 [00:03<00:00, 26.51it/s, disc_loss=0.516, gen_loss=2.92] Training Epoch 218/501: 100%|██████████| 98/98 [00:02<00:00, 43.30it/s, gen_loss=0.0666] Training Epoch 219/501: 100%|██████████| 98/98 [00:04<00:00, 22.75it/s, disc_loss=0.39, gen_loss=1.83] Training Epoch 220/501: 100%|██████████| 98/98 [00:04<00:00, 20.79it/s, disc_loss=0.388, gen_loss=2.17] Training Epoch 221/501: 100%|██████████| 98/98 [00:03<00:00, 25.36it/s, disc_loss=0.345, gen_loss=2.46] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 84.88069915771484, KID: 0.07181853801012039
Training Epoch 222/501: 100%|██████████| 98/98 [00:03<00:00, 31.69it/s, gen_loss=0.874] Training Epoch 223/501: 100%|██████████| 98/98 [00:01<00:00, 52.70it/s, gen_loss=0.344] Training Epoch 224/501: 100%|██████████| 98/98 [00:02<00:00, 42.35it/s, disc_loss=0.35, gen_loss=2.55] Training Epoch 225/501: 100%|██████████| 98/98 [00:04<00:00, 21.90it/s, gen_loss=1.76] Training Epoch 226/501: 100%|██████████| 98/98 [00:03<00:00, 27.96it/s, disc_loss=0.381, gen_loss=2.09] Training Epoch 227/501: 100%|██████████| 98/98 [00:04<00:00, 20.28it/s, disc_loss=0.364, gen_loss=2.71] Training Epoch 228/501: 100%|██████████| 98/98 [00:02<00:00, 43.07it/s, gen_loss=1.17] Training Epoch 229/501: 100%|██████████| 98/98 [00:01<00:00, 52.78it/s, gen_loss=0.0756] Training Epoch 230/501: 100%|██████████| 98/98 [00:04<00:00, 20.87it/s, disc_loss=0.369, gen_loss=1.94] Training Epoch 231/501: 100%|██████████| 98/98 [00:03<00:00, 24.58it/s, gen_loss=0.375] Training Epoch 232/501: 100%|██████████| 98/98 [00:04<00:00, 21.04it/s, disc_loss=0.367, gen_loss=2.12] Training Epoch 233/501: 100%|██████████| 98/98 [00:03<00:00, 26.55it/s, gen_loss=1.84] Training Epoch 234/501: 100%|██████████| 98/98 [00:01<00:00, 50.17it/s, gen_loss=0.208] Training Epoch 235/501: 100%|██████████| 98/98 [00:03<00:00, 31.62it/s, gen_loss=0.833] Training Epoch 236/501: 100%|██████████| 98/98 [00:04<00:00, 23.06it/s, gen_loss=0.92] Training Epoch 237/501: 100%|██████████| 98/98 [00:04<00:00, 22.08it/s, disc_loss=0.362, gen_loss=2.47] Training Epoch 238/501: 100%|██████████| 98/98 [00:04<00:00, 23.91it/s, gen_loss=1.26] Training Epoch 239/501: 100%|██████████| 98/98 [00:02<00:00, 42.32it/s, gen_loss=0.632] Training Epoch 240/501: 100%|██████████| 98/98 [00:02<00:00, 46.25it/s, disc_loss=0.501, gen_loss=2.92] Training Epoch 241/501: 100%|██████████| 98/98 [00:04<00:00, 21.83it/s, disc_loss=0.441, gen_loss=1.65] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 57.13568878173828, KID: 0.04434883967041969
Training Epoch 242/501: 100%|██████████| 98/98 [00:03<00:00, 25.30it/s, disc_loss=0.381, gen_loss=1.8] Training Epoch 243/501: 100%|██████████| 98/98 [00:04<00:00, 21.39it/s, gen_loss=1.49] Training Epoch 244/501: 100%|██████████| 98/98 [00:02<00:00, 36.35it/s, disc_loss=1.21, gen_loss=3.31] Training Epoch 245/501: 100%|██████████| 98/98 [00:02<00:00, 41.54it/s, gen_loss=0.0613] Training Epoch 246/501: 100%|██████████| 98/98 [00:04<00:00, 23.72it/s, disc_loss=0.379, gen_loss=2.16] Training Epoch 247/501: 100%|██████████| 98/98 [00:04<00:00, 24.31it/s, disc_loss=0.361, gen_loss=2.16] Training Epoch 248/501: 100%|██████████| 98/98 [00:04<00:00, 20.43it/s, disc_loss=0.399, gen_loss=1.96] Training Epoch 249/501: 100%|██████████| 98/98 [00:03<00:00, 25.89it/s, disc_loss=0.505, gen_loss=1.81] Training Epoch 250/501: 100%|██████████| 98/98 [00:02<00:00, 35.58it/s, gen_loss=0.118] Training Epoch 251/501: 100%|██████████| 98/98 [00:03<00:00, 30.04it/s, disc_loss=0.369, gen_loss=2.44]
Training Epoch 252/501: 100%|██████████| 98/98 [00:04<00:00, 21.68it/s, disc_loss=0.682, gen_loss=2.92] Training Epoch 253/501: 100%|██████████| 98/98 [00:04<00:00, 23.18it/s, gen_loss=0.846] Training Epoch 254/501: 100%|██████████| 98/98 [00:03<00:00, 31.73it/s, gen_loss=0.698] Training Epoch 255/501: 100%|██████████| 98/98 [00:02<00:00, 46.32it/s, gen_loss=0.531] Training Epoch 256/501: 100%|██████████| 98/98 [00:02<00:00, 43.66it/s, disc_loss=0.377, gen_loss=3.33] Training Epoch 257/501: 100%|██████████| 98/98 [00:04<00:00, 20.17it/s, disc_loss=0.385, gen_loss=2.11] Training Epoch 258/501: 100%|██████████| 98/98 [00:03<00:00, 25.67it/s, disc_loss=0.365, gen_loss=1.47] Training Epoch 259/501: 100%|██████████| 98/98 [00:04<00:00, 20.80it/s, disc_loss=0.384, gen_loss=2.11] Training Epoch 260/501: 100%|██████████| 98/98 [00:02<00:00, 33.40it/s, gen_loss=0.656] Training Epoch 261/501: 100%|██████████| 98/98 [00:02<00:00, 48.53it/s, gen_loss=0.284] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 67.94567108154297, KID: 0.048751965165138245
Training Epoch 262/501: 100%|██████████| 98/98 [00:03<00:00, 26.22it/s, disc_loss=0.345, gen_loss=2.21] Training Epoch 263/501: 100%|██████████| 98/98 [00:03<00:00, 27.56it/s, disc_loss=0.384, gen_loss=1.55] Training Epoch 264/501: 100%|██████████| 98/98 [00:04<00:00, 23.75it/s, disc_loss=0.367, gen_loss=2.1] Training Epoch 265/501: 100%|██████████| 98/98 [00:03<00:00, 25.92it/s, disc_loss=0.508, gen_loss=1.17] Training Epoch 266/501: 100%|██████████| 98/98 [00:02<00:00, 34.32it/s, gen_loss=0.245] Training Epoch 267/501: 100%|██████████| 98/98 [00:02<00:00, 36.54it/s, disc_loss=0.387, gen_loss=1.77] Training Epoch 268/501: 100%|██████████| 98/98 [00:04<00:00, 21.54it/s, disc_loss=0.364, gen_loss=1.91] Training Epoch 269/501: 100%|██████████| 98/98 [00:03<00:00, 26.39it/s, disc_loss=0.395, gen_loss=2.28] Training Epoch 270/501: 100%|██████████| 98/98 [00:04<00:00, 22.07it/s, disc_loss=0.392, gen_loss=1.7] Training Epoch 271/501: 100%|██████████| 98/98 [00:03<00:00, 30.05it/s, gen_loss=0.666] Training Epoch 272/501: 100%|██████████| 98/98 [00:01<00:00, 50.72it/s, gen_loss=0.0504] Training Epoch 273/501: 100%|██████████| 98/98 [00:04<00:00, 20.22it/s, disc_loss=0.403, gen_loss=2.29] Training Epoch 274/501: 100%|██████████| 98/98 [00:03<00:00, 24.92it/s, disc_loss=0.365, gen_loss=2.06] Training Epoch 275/501: 100%|██████████| 98/98 [00:04<00:00, 20.88it/s, disc_loss=0.372, gen_loss=1.93] Training Epoch 276/501: 100%|██████████| 98/98 [00:03<00:00, 31.88it/s, disc_loss=0.357, gen_loss=2.3] Training Epoch 277/501: 100%|██████████| 98/98 [00:01<00:00, 49.64it/s, gen_loss=0.366] Training Epoch 278/501: 100%|██████████| 98/98 [00:02<00:00, 33.27it/s, disc_loss=0.344, gen_loss=2.38] Training Epoch 279/501: 100%|██████████| 98/98 [00:03<00:00, 28.39it/s, disc_loss=0.347, gen_loss=2.24] Training Epoch 280/501: 100%|██████████| 98/98 [00:03<00:00, 24.69it/s, disc_loss=0.344, gen_loss=2.13] Training Epoch 281/501: 100%|██████████| 98/98 [00:04<00:00, 24.47it/s, disc_loss=0.399, gen_loss=2.01] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 63.32770538330078, KID: 0.054315175861120224
Training Epoch 282/501: 100%|██████████| 98/98 [00:03<00:00, 28.85it/s, gen_loss=0.973] Training Epoch 283/501: 100%|██████████| 98/98 [00:01<00:00, 49.63it/s, gen_loss=0.0852] Training Epoch 284/501: 100%|██████████| 98/98 [00:04<00:00, 21.02it/s, gen_loss=0.455] Training Epoch 285/501: 100%|██████████| 98/98 [00:03<00:00, 26.79it/s, disc_loss=0.36, gen_loss=2.28] Training Epoch 286/501: 100%|██████████| 98/98 [00:04<00:00, 22.29it/s, gen_loss=1.73] Training Epoch 287/501: 100%|██████████| 98/98 [00:03<00:00, 24.71it/s, gen_loss=1.11] Training Epoch 288/501: 100%|██████████| 98/98 [00:01<00:00, 49.83it/s, gen_loss=0.191] Training Epoch 289/501: 100%|██████████| 98/98 [00:03<00:00, 25.36it/s, disc_loss=0.35, gen_loss=2.46] Training Epoch 290/501: 100%|██████████| 98/98 [00:04<00:00, 24.14it/s, gen_loss=1.21] Training Epoch 291/501: 100%|██████████| 98/98 [00:04<00:00, 21.25it/s, gen_loss=0.89] Training Epoch 292/501: 100%|██████████| 98/98 [00:03<00:00, 29.65it/s, disc_loss=0.547, gen_loss=2.77] Training Epoch 293/501: 100%|██████████| 98/98 [00:03<00:00, 31.88it/s, gen_loss=0.173] Training Epoch 294/501: 100%|██████████| 98/98 [00:02<00:00, 33.49it/s, disc_loss=0.376, gen_loss=1.98] Training Epoch 295/501: 100%|██████████| 98/98 [00:04<00:00, 22.72it/s, disc_loss=0.371, gen_loss=2.04] Training Epoch 296/501: 100%|██████████| 98/98 [00:04<00:00, 20.19it/s, disc_loss=0.368, gen_loss=2.24] Training Epoch 297/501: 100%|██████████| 98/98 [00:03<00:00, 25.61it/s, gen_loss=1.74] Training Epoch 298/501: 100%|██████████| 98/98 [00:03<00:00, 27.53it/s, gen_loss=0.359] Training Epoch 299/501: 100%|██████████| 98/98 [00:02<00:00, 45.84it/s, disc_loss=0.405, gen_loss=2.45] Training Epoch 300/501: 100%|██████████| 98/98 [00:04<00:00, 20.89it/s, disc_loss=0.369, gen_loss=2.09] Training Epoch 301/501: 100%|██████████| 98/98 [00:03<00:00, 25.34it/s, disc_loss=0.357, gen_loss=1.83] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 59.989845275878906, KID: 0.04810653254389763
Training Epoch 302/501: 100%|██████████| 98/98 [00:04<00:00, 22.69it/s, gen_loss=0.531] Training Epoch 303/501: 100%|██████████| 98/98 [00:03<00:00, 28.76it/s, gen_loss=1.65] Training Epoch 304/501: 100%|██████████| 98/98 [00:02<00:00, 45.83it/s, gen_loss=0.0815] Training Epoch 305/501: 100%|██████████| 98/98 [00:04<00:00, 23.86it/s, disc_loss=0.368, gen_loss=2.34] Training Epoch 306/501: 100%|██████████| 98/98 [00:03<00:00, 30.06it/s, gen_loss=0.593] Training Epoch 307/501: 100%|██████████| 98/98 [00:04<00:00, 22.99it/s, gen_loss=0.321] Training Epoch 308/501: 100%|██████████| 98/98 [00:03<00:00, 25.07it/s, disc_loss=0.369, gen_loss=2.17] Training Epoch 309/501: 100%|██████████| 98/98 [00:02<00:00, 37.86it/s, gen_loss=0.112] Training Epoch 310/501: 100%|██████████| 98/98 [00:03<00:00, 29.37it/s, disc_loss=0.364, gen_loss=2.45] Training Epoch 311/501: 100%|██████████| 98/98 [00:04<00:00, 22.90it/s, disc_loss=0.369, gen_loss=2.37] Training Epoch 312/501: 100%|██████████| 98/98 [00:04<00:00, 22.34it/s, disc_loss=0.377, gen_loss=2.06] Training Epoch 313/501: 100%|██████████| 98/98 [00:03<00:00, 30.73it/s, disc_loss=0.574, gen_loss=2.6] Training Epoch 314/501: 100%|██████████| 98/98 [00:03<00:00, 26.71it/s, gen_loss=0.153] Training Epoch 315/501: 100%|██████████| 98/98 [00:02<00:00, 38.85it/s, disc_loss=0.384, gen_loss=1.95] Training Epoch 316/501: 100%|██████████| 98/98 [00:04<00:00, 20.89it/s, disc_loss=0.435, gen_loss=3.43] Training Epoch 317/501: 100%|██████████| 98/98 [00:04<00:00, 23.74it/s, disc_loss=0.364, gen_loss=2.1] Training Epoch 318/501: 100%|██████████| 98/98 [00:03<00:00, 27.23it/s, disc_loss=0.388, gen_loss=2.17] Training Epoch 319/501: 100%|██████████| 98/98 [00:04<00:00, 22.54it/s, gen_loss=1.24] Training Epoch 320/501: 100%|██████████| 98/98 [00:01<00:00, 49.50it/s, gen_loss=0.0531] Training Epoch 321/501: 100%|██████████| 98/98 [00:04<00:00, 22.77it/s, disc_loss=0.35, gen_loss=2.37] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 56.3382453918457, KID: 0.04644738510251045
Training Epoch 322/501: 100%|██████████| 98/98 [00:03<00:00, 24.72it/s, disc_loss=0.363, gen_loss=2.59] Training Epoch 323/501: 100%|██████████| 98/98 [00:04<00:00, 22.51it/s, gen_loss=2.15] Training Epoch 324/501: 100%|██████████| 98/98 [00:03<00:00, 25.24it/s, gen_loss=1.29] Training Epoch 325/501: 100%|██████████| 98/98 [00:02<00:00, 42.05it/s, gen_loss=0.12] Training Epoch 326/501: 100%|██████████| 98/98 [00:03<00:00, 25.24it/s, disc_loss=0.383, gen_loss=2.11] Training Epoch 327/501: 100%|██████████| 98/98 [00:04<00:00, 20.61it/s, disc_loss=0.377, gen_loss=1.7] Training Epoch 328/501: 100%|██████████| 98/98 [00:04<00:00, 23.90it/s, gen_loss=1.22] Training Epoch 329/501: 100%|██████████| 98/98 [00:02<00:00, 33.81it/s, disc_loss=0.369, gen_loss=2.19] Training Epoch 330/501: 100%|██████████| 98/98 [00:03<00:00, 32.40it/s, gen_loss=0.132] Training Epoch 331/501: 100%|██████████| 98/98 [00:02<00:00, 34.28it/s, disc_loss=0.362, gen_loss=2.03] Training Epoch 332/501: 100%|██████████| 98/98 [00:04<00:00, 21.00it/s, disc_loss=0.408, gen_loss=1.6] Training Epoch 333/501: 100%|██████████| 98/98 [00:04<00:00, 21.69it/s, disc_loss=0.37, gen_loss=1.87] Training Epoch 334/501: 100%|██████████| 98/98 [00:03<00:00, 26.22it/s, disc_loss=0.357, gen_loss=2.42] Training Epoch 335/501: 100%|██████████| 98/98 [00:03<00:00, 26.27it/s, gen_loss=0.361] Training Epoch 336/501: 100%|██████████| 98/98 [00:02<00:00, 45.13it/s, disc_loss=0.932, gen_loss=2.67] Training Epoch 337/501: 100%|██████████| 98/98 [00:04<00:00, 20.19it/s, gen_loss=0.647] Training Epoch 338/501: 100%|██████████| 98/98 [00:04<00:00, 22.48it/s, disc_loss=0.371, gen_loss=2.18] Training Epoch 339/501: 100%|██████████| 98/98 [00:04<00:00, 22.61it/s, gen_loss=1.02] Training Epoch 340/501: 100%|██████████| 98/98 [00:03<00:00, 26.65it/s, gen_loss=0.652] Training Epoch 341/501: 100%|██████████| 98/98 [00:02<00:00, 47.42it/s, gen_loss=0.0898] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 60.95038986206055, KID: 0.05010538920760155
Training Epoch 342/501: 100%|██████████| 98/98 [00:03<00:00, 25.23it/s, disc_loss=0.38, gen_loss=2.36] Training Epoch 343/501: 100%|██████████| 98/98 [00:04<00:00, 22.64it/s, gen_loss=0.66] Training Epoch 344/501: 100%|██████████| 98/98 [00:04<00:00, 21.78it/s, gen_loss=1.39] Training Epoch 345/501: 100%|██████████| 98/98 [00:02<00:00, 35.54it/s, disc_loss=0.357, gen_loss=2.04] Training Epoch 346/501: 100%|██████████| 98/98 [00:02<00:00, 41.20it/s, gen_loss=0.314] Training Epoch 347/501: 100%|██████████| 98/98 [00:03<00:00, 29.16it/s, disc_loss=0.345, gen_loss=2.36] Training Epoch 348/501: 100%|██████████| 98/98 [00:04<00:00, 24.16it/s, disc_loss=0.344, gen_loss=2.41] Training Epoch 349/501: 100%|██████████| 98/98 [00:03<00:00, 32.14it/s, disc_loss=0.342, gen_loss=2.34] Training Epoch 350/501: 100%|██████████| 98/98 [00:02<00:00, 38.00it/s, gen_loss=0.953] Training Epoch 351/501: 100%|██████████| 98/98 [00:02<00:00, 43.42it/s, gen_loss=1.81]
Training Epoch 352/501: 100%|██████████| 98/98 [00:01<00:00, 51.04it/s, gen_loss=0.814] Training Epoch 353/501: 100%|██████████| 98/98 [00:01<00:00, 49.78it/s, gen_loss=0.268] Training Epoch 354/501: 100%|██████████| 98/98 [00:03<00:00, 31.38it/s, gen_loss=1.3] Training Epoch 355/501: 100%|██████████| 98/98 [00:03<00:00, 28.02it/s, gen_loss=2.06] Training Epoch 356/501: 100%|██████████| 98/98 [00:02<00:00, 47.64it/s, disc_loss=0.346, gen_loss=2.85] Training Epoch 357/501: 100%|██████████| 98/98 [00:04<00:00, 21.16it/s, gen_loss=1.27] Training Epoch 358/501: 100%|██████████| 98/98 [00:01<00:00, 49.30it/s, gen_loss=0.483] Training Epoch 359/501: 100%|██████████| 98/98 [00:02<00:00, 42.00it/s, disc_loss=0.403, gen_loss=2.21] Training Epoch 360/501: 100%|██████████| 98/98 [00:03<00:00, 28.94it/s, disc_loss=0.34, gen_loss=2.31] Training Epoch 361/501: 100%|██████████| 98/98 [00:04<00:00, 23.54it/s, gen_loss=1.39] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 133.918212890625, KID: 0.13439655303955078
Training Epoch 362/501: 100%|██████████| 98/98 [00:04<00:00, 22.40it/s, disc_loss=0.355, gen_loss=2.24] Training Epoch 363/501: 100%|██████████| 98/98 [00:04<00:00, 21.23it/s, gen_loss=1.14] Training Epoch 364/501: 100%|██████████| 98/98 [00:01<00:00, 49.81it/s, gen_loss=0.23] Training Epoch 365/501: 100%|██████████| 98/98 [00:03<00:00, 30.54it/s, gen_loss=1.67] Training Epoch 366/501: 100%|██████████| 98/98 [00:04<00:00, 21.24it/s, disc_loss=0.358, gen_loss=2.53] Training Epoch 367/501: 100%|██████████| 98/98 [00:03<00:00, 28.57it/s, disc_loss=0.356, gen_loss=2.09] Training Epoch 368/501: 100%|██████████| 98/98 [00:04<00:00, 21.09it/s, disc_loss=0.363, gen_loss=2.27] Training Epoch 369/501: 100%|██████████| 98/98 [00:02<00:00, 33.38it/s, gen_loss=1.73] Training Epoch 370/501: 100%|██████████| 98/98 [00:02<00:00, 47.69it/s, disc_loss=0.472, gen_loss=1.27] Training Epoch 371/501: 100%|██████████| 98/98 [00:04<00:00, 23.29it/s, disc_loss=0.354, gen_loss=2.21] Training Epoch 372/501: 100%|██████████| 98/98 [00:03<00:00, 28.05it/s, disc_loss=0.365, gen_loss=2.14] Training Epoch 373/501: 100%|██████████| 98/98 [00:04<00:00, 21.68it/s, gen_loss=1.07] Training Epoch 374/501: 100%|██████████| 98/98 [00:03<00:00, 30.44it/s, gen_loss=4.14] Training Epoch 375/501: 100%|██████████| 98/98 [00:01<00:00, 49.87it/s, gen_loss=0.126] Training Epoch 376/501: 100%|██████████| 98/98 [00:03<00:00, 24.80it/s, disc_loss=0.381, gen_loss=2.03] Training Epoch 377/501: 100%|██████████| 98/98 [00:04<00:00, 22.30it/s, gen_loss=0.406] Training Epoch 378/501: 100%|██████████| 98/98 [00:04<00:00, 21.34it/s, disc_loss=0.364, gen_loss=2.05] Training Epoch 379/501: 100%|██████████| 98/98 [00:04<00:00, 22.68it/s, gen_loss=1.33] Training Epoch 380/501: 100%|██████████| 98/98 [00:01<00:00, 49.33it/s, gen_loss=0.46] Training Epoch 381/501: 100%|██████████| 98/98 [00:02<00:00, 47.95it/s, gen_loss=0.748] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 82.82144165039062, KID: 0.06408078223466873
Training Epoch 382/501: 100%|██████████| 98/98 [00:04<00:00, 23.73it/s, gen_loss=2.06] Training Epoch 383/501: 100%|██████████| 98/98 [00:02<00:00, 38.03it/s, gen_loss=1.85] Training Epoch 384/501: 100%|██████████| 98/98 [00:03<00:00, 31.18it/s, disc_loss=0.347, gen_loss=1.95] Training Epoch 385/501: 100%|██████████| 98/98 [00:02<00:00, 34.26it/s, gen_loss=0.903] Training Epoch 386/501: 100%|██████████| 98/98 [00:01<00:00, 49.50it/s, gen_loss=0.216] Training Epoch 387/501: 100%|██████████| 98/98 [00:03<00:00, 26.08it/s, disc_loss=0.392, gen_loss=2.41] Training Epoch 388/501: 100%|██████████| 98/98 [00:03<00:00, 27.25it/s, gen_loss=0.776] Training Epoch 389/501: 100%|██████████| 98/98 [00:04<00:00, 24.09it/s, gen_loss=2.6] Training Epoch 390/501: 100%|██████████| 98/98 [00:04<00:00, 23.64it/s, gen_loss=0.844] Training Epoch 391/501: 100%|██████████| 98/98 [00:02<00:00, 48.14it/s, gen_loss=0.349] Training Epoch 392/501: 100%|██████████| 98/98 [00:03<00:00, 31.38it/s, disc_loss=0.366, gen_loss=2.31] Training Epoch 393/501: 100%|██████████| 98/98 [00:04<00:00, 21.83it/s, gen_loss=0.692] Training Epoch 394/501: 100%|██████████| 98/98 [00:04<00:00, 23.53it/s, disc_loss=0.385, gen_loss=1.93] Training Epoch 395/501: 100%|██████████| 98/98 [00:04<00:00, 22.49it/s, gen_loss=1.77] Training Epoch 396/501: 100%|██████████| 98/98 [00:02<00:00, 37.11it/s, gen_loss=0.733] Training Epoch 397/501: 100%|██████████| 98/98 [00:01<00:00, 49.89it/s, disc_loss=0.58, gen_loss=1.02] Training Epoch 398/501: 100%|██████████| 98/98 [00:04<00:00, 21.10it/s, disc_loss=0.369, gen_loss=2.7] Training Epoch 399/501: 100%|██████████| 98/98 [00:03<00:00, 27.42it/s, disc_loss=0.354, gen_loss=2.35] Training Epoch 400/501: 100%|██████████| 98/98 [00:04<00:00, 21.53it/s, disc_loss=0.355, gen_loss=2.26] Training Epoch 401/501: 100%|██████████| 98/98 [00:02<00:00, 34.36it/s, gen_loss=0.21] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 55.0870475769043, KID: 0.042650409042835236
Training Epoch 402/501: 100%|██████████| 98/98 [00:02<00:00, 38.58it/s, gen_loss=0.133] Training Epoch 403/501: 100%|██████████| 98/98 [00:03<00:00, 25.68it/s, disc_loss=0.363, gen_loss=2.12] Training Epoch 404/501: 100%|██████████| 98/98 [00:03<00:00, 28.03it/s, gen_loss=0.845] Training Epoch 405/501: 100%|██████████| 98/98 [00:04<00:00, 24.20it/s, disc_loss=0.352, gen_loss=2.13] Training Epoch 406/501: 100%|██████████| 98/98 [00:03<00:00, 26.55it/s, gen_loss=1.34] Training Epoch 407/501: 100%|██████████| 98/98 [00:01<00:00, 51.03it/s, gen_loss=0.252] Training Epoch 408/501: 100%|██████████| 98/98 [00:02<00:00, 37.68it/s, gen_loss=2.36] Training Epoch 409/501: 100%|██████████| 98/98 [00:02<00:00, 44.55it/s, gen_loss=1.49] Training Epoch 410/501: 100%|██████████| 98/98 [00:02<00:00, 34.53it/s, disc_loss=0.34, gen_loss=2.28] Training Epoch 411/501: 100%|██████████| 98/98 [00:02<00:00, 45.93it/s, gen_loss=0.976] Training Epoch 412/501: 100%|██████████| 98/98 [00:02<00:00, 40.56it/s, gen_loss=0.918] Training Epoch 413/501: 100%|██████████| 98/98 [00:02<00:00, 47.89it/s, gen_loss=0.241] Training Epoch 414/501: 100%|██████████| 98/98 [00:03<00:00, 24.72it/s, disc_loss=0.339, gen_loss=2.35] Training Epoch 415/501: 100%|██████████| 98/98 [00:03<00:00, 28.37it/s, gen_loss=1.46] Training Epoch 416/501: 100%|██████████| 98/98 [00:04<00:00, 22.33it/s, gen_loss=2.12] Training Epoch 417/501: 100%|██████████| 98/98 [00:03<00:00, 27.20it/s, gen_loss=2.47] Training Epoch 418/501: 100%|██████████| 98/98 [00:02<00:00, 46.55it/s, gen_loss=0.158] Training Epoch 419/501: 100%|██████████| 98/98 [00:02<00:00, 32.71it/s, disc_loss=0.359, gen_loss=2.05] Training Epoch 420/501: 100%|██████████| 98/98 [00:04<00:00, 20.41it/s, disc_loss=0.394, gen_loss=1.05] Training Epoch 421/501: 100%|██████████| 98/98 [00:04<00:00, 24.07it/s, disc_loss=0.372, gen_loss=1.68] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 48.78739547729492, KID: 0.036700546741485596
Training Epoch 422/501: 100%|██████████| 98/98 [00:04<00:00, 23.16it/s, disc_loss=0.355, gen_loss=2.32] Training Epoch 423/501: 100%|██████████| 98/98 [00:02<00:00, 33.37it/s, gen_loss=0.8] Training Epoch 424/501: 100%|██████████| 98/98 [00:01<00:00, 50.12it/s, gen_loss=0.131] Training Epoch 425/501: 100%|██████████| 98/98 [00:04<00:00, 23.15it/s, disc_loss=0.355, gen_loss=2.54] Training Epoch 426/501: 100%|██████████| 98/98 [00:03<00:00, 28.43it/s, disc_loss=0.401, gen_loss=1.96] Training Epoch 427/501: 100%|██████████| 98/98 [00:03<00:00, 25.43it/s, gen_loss=0.917] Training Epoch 428/501: 100%|██████████| 98/98 [00:03<00:00, 26.28it/s, gen_loss=0.876] Training Epoch 429/501: 100%|██████████| 98/98 [00:02<00:00, 47.19it/s, gen_loss=0.219] Training Epoch 430/501: 100%|██████████| 98/98 [00:03<00:00, 30.26it/s, disc_loss=0.368, gen_loss=2.12] Training Epoch 431/501: 100%|██████████| 98/98 [00:04<00:00, 21.70it/s, gen_loss=0.833] Training Epoch 432/501: 100%|██████████| 98/98 [00:03<00:00, 25.81it/s, disc_loss=0.362, gen_loss=2.36] Training Epoch 433/501: 100%|██████████| 98/98 [00:04<00:00, 21.66it/s, disc_loss=0.351, gen_loss=2.01] Training Epoch 434/501: 100%|██████████| 98/98 [00:02<00:00, 46.90it/s, gen_loss=0.394] Training Epoch 435/501: 100%|██████████| 98/98 [00:01<00:00, 52.37it/s, disc_loss=1.3, gen_loss=1.38] Training Epoch 436/501: 100%|██████████| 98/98 [00:05<00:00, 19.01it/s, disc_loss=0.355, gen_loss=2.31] Training Epoch 437/501: 100%|██████████| 98/98 [00:03<00:00, 26.86it/s, disc_loss=0.361, gen_loss=1.93] Training Epoch 438/501: 100%|██████████| 98/98 [00:04<00:00, 21.83it/s, disc_loss=0.346, gen_loss=2.3] Training Epoch 439/501: 100%|██████████| 98/98 [00:03<00:00, 28.86it/s, gen_loss=0.989] Training Epoch 440/501: 100%|██████████| 98/98 [00:02<00:00, 48.08it/s, gen_loss=0.0623] Training Epoch 441/501: 100%|██████████| 98/98 [00:04<00:00, 22.02it/s, disc_loss=0.371, gen_loss=2.27] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 40.46397018432617, KID: 0.03264511376619339
Training Epoch 442/501: 100%|██████████| 98/98 [00:03<00:00, 27.90it/s, gen_loss=0.729] Training Epoch 443/501: 100%|██████████| 98/98 [00:04<00:00, 22.76it/s, gen_loss=1.51] Training Epoch 444/501: 100%|██████████| 98/98 [00:03<00:00, 31.77it/s, gen_loss=0.852] Training Epoch 445/501: 100%|██████████| 98/98 [00:02<00:00, 39.38it/s, gen_loss=0.276] Training Epoch 446/501: 100%|██████████| 98/98 [00:02<00:00, 32.69it/s, disc_loss=0.347, gen_loss=2.25] Training Epoch 447/501: 100%|██████████| 98/98 [00:03<00:00, 28.99it/s, gen_loss=1.34] Training Epoch 448/501: 100%|██████████| 98/98 [00:02<00:00, 48.79it/s, gen_loss=0.772] Training Epoch 449/501: 100%|██████████| 98/98 [00:02<00:00, 35.68it/s, gen_loss=1.52] Training Epoch 450/501: 100%|██████████| 98/98 [00:02<00:00, 48.35it/s, gen_loss=0.881] Training Epoch 451/501: 100%|██████████| 98/98 [00:01<00:00, 52.13it/s, gen_loss=0.538]
Training Epoch 452/501: 100%|██████████| 98/98 [00:02<00:00, 39.74it/s, disc_loss=0.342, gen_loss=2.13] Training Epoch 453/501: 100%|██████████| 98/98 [00:03<00:00, 30.44it/s, gen_loss=1.39] Training Epoch 454/501: 100%|██████████| 98/98 [00:02<00:00, 43.89it/s, disc_loss=0.336, gen_loss=2.15] Training Epoch 455/501: 100%|██████████| 98/98 [00:03<00:00, 25.40it/s, gen_loss=1.96] Training Epoch 456/501: 100%|██████████| 98/98 [00:02<00:00, 48.01it/s, gen_loss=0.983] Training Epoch 457/501: 100%|██████████| 98/98 [00:01<00:00, 51.51it/s, gen_loss=0.402] Training Epoch 458/501: 100%|██████████| 98/98 [00:03<00:00, 31.55it/s, disc_loss=0.338, gen_loss=2.19] Training Epoch 459/501: 100%|██████████| 98/98 [00:03<00:00, 27.58it/s, gen_loss=0.974] Training Epoch 460/501: 100%|██████████| 98/98 [00:03<00:00, 24.80it/s, disc_loss=0.334, gen_loss=2.31] Training Epoch 461/501: 100%|██████████| 98/98 [00:05<00:00, 18.36it/s, disc_loss=0.337, gen_loss=2.25] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 259.589599609375, KID: 0.3383238613605499
Training Epoch 462/501: 100%|██████████| 98/98 [00:01<00:00, 52.14it/s, gen_loss=0.264] Training Epoch 463/501: 100%|██████████| 98/98 [00:02<00:00, 47.17it/s, disc_loss=0.387, gen_loss=3.07] Training Epoch 464/501: 100%|██████████| 98/98 [00:05<00:00, 19.14it/s, disc_loss=0.342, gen_loss=2.18] Training Epoch 465/501: 100%|██████████| 98/98 [00:03<00:00, 25.65it/s, disc_loss=0.361, gen_loss=1.92] Training Epoch 466/501: 100%|██████████| 98/98 [00:05<00:00, 19.38it/s, disc_loss=0.369, gen_loss=2.06] Training Epoch 467/501: 100%|██████████| 98/98 [00:03<00:00, 29.69it/s, gen_loss=0.407] Training Epoch 468/501: 100%|██████████| 98/98 [00:01<00:00, 49.27it/s, gen_loss=0.0903] Training Epoch 469/501: 100%|██████████| 98/98 [00:04<00:00, 21.61it/s, disc_loss=0.353, gen_loss=2.17] Training Epoch 470/501: 100%|██████████| 98/98 [00:04<00:00, 23.42it/s, disc_loss=0.349, gen_loss=2.51] Training Epoch 471/501: 100%|██████████| 98/98 [00:05<00:00, 18.59it/s, disc_loss=0.346, gen_loss=2.3] Training Epoch 472/501: 100%|██████████| 98/98 [00:04<00:00, 21.49it/s, disc_loss=0.368, gen_loss=2.11] Training Epoch 473/501: 100%|██████████| 98/98 [00:01<00:00, 51.84it/s, gen_loss=0.112] Training Epoch 474/501: 100%|██████████| 98/98 [00:03<00:00, 27.56it/s, disc_loss=0.364, gen_loss=1.94] Training Epoch 475/501: 100%|██████████| 98/98 [00:04<00:00, 20.33it/s, gen_loss=0.979] Training Epoch 476/501: 100%|██████████| 98/98 [00:04<00:00, 22.63it/s, disc_loss=0.357, gen_loss=2.4] Training Epoch 477/501: 100%|██████████| 98/98 [00:04<00:00, 20.08it/s, disc_loss=0.351, gen_loss=2.42] Training Epoch 478/501: 100%|██████████| 98/98 [00:02<00:00, 34.80it/s, gen_loss=0.249] Training Epoch 479/501: 100%|██████████| 98/98 [00:02<00:00, 46.29it/s, disc_loss=0.371, gen_loss=3.16] Training Epoch 480/501: 100%|██████████| 98/98 [00:05<00:00, 18.84it/s, disc_loss=0.367, gen_loss=2.71] Training Epoch 481/501: 100%|██████████| 98/98 [00:04<00:00, 22.93it/s, disc_loss=0.356, gen_loss=1.9] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 54.721527099609375, KID: 0.04857427999377251
Training Epoch 482/501: 100%|██████████| 98/98 [00:04<00:00, 19.89it/s, disc_loss=0.361, gen_loss=2.09] Training Epoch 483/501: 100%|██████████| 98/98 [00:03<00:00, 24.85it/s, gen_loss=0.299] Training Epoch 484/501: 100%|██████████| 98/98 [00:02<00:00, 47.80it/s, gen_loss=0.0794] Training Epoch 485/501: 100%|██████████| 98/98 [00:04<00:00, 24.04it/s, disc_loss=0.349, gen_loss=2.83] Training Epoch 486/501: 100%|██████████| 98/98 [00:04<00:00, 23.95it/s, gen_loss=0.996] Training Epoch 487/501: 100%|██████████| 98/98 [00:04<00:00, 21.41it/s, disc_loss=0.343, gen_loss=2.41] Training Epoch 488/501: 100%|██████████| 98/98 [00:04<00:00, 19.80it/s, gen_loss=1.28] Training Epoch 489/501: 100%|██████████| 98/98 [00:01<00:00, 50.38it/s, gen_loss=0.0726] Training Epoch 490/501: 100%|██████████| 98/98 [00:03<00:00, 27.48it/s, disc_loss=0.364, gen_loss=2.31] Training Epoch 491/501: 100%|██████████| 98/98 [00:05<00:00, 18.80it/s, disc_loss=0.352, gen_loss=2.12] Training Epoch 492/501: 100%|██████████| 98/98 [00:03<00:00, 27.41it/s, disc_loss=0.357, gen_loss=2.42] Training Epoch 493/501: 100%|██████████| 98/98 [00:05<00:00, 19.49it/s, disc_loss=0.364, gen_loss=2.43] Training Epoch 494/501: 100%|██████████| 98/98 [00:02<00:00, 35.64it/s, gen_loss=0.159] Training Epoch 495/501: 100%|██████████| 98/98 [00:02<00:00, 39.20it/s, disc_loss=0.389, gen_loss=1.69] Training Epoch 496/501: 100%|██████████| 98/98 [00:05<00:00, 18.66it/s, disc_loss=0.357, gen_loss=2.14] Training Epoch 497/501: 100%|██████████| 98/98 [00:03<00:00, 29.43it/s, disc_loss=0.366, gen_loss=2.26] Training Epoch 498/501: 100%|██████████| 98/98 [00:05<00:00, 19.59it/s, disc_loss=0.367, gen_loss=2.73] Training Epoch 499/501: 100%|██████████| 98/98 [00:04<00:00, 24.35it/s, gen_loss=0.208] Training Epoch 500/501: 100%|██████████| 98/98 [00:02<00:00, 48.73it/s, gen_loss=0.0554] Training Epoch 501/501: 100%|██████████| 98/98 [00:04<00:00, 20.13it/s, disc_loss=0.355, gen_loss=2.17] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 44.06201934814453, KID: 0.0339730903506279
<Figure size 1200x600 with 0 Axes>
Observations:¶
- In terms of FID, this one is more unstable due as you can see random spikes here and there.
- The loss seems more compact and stable
- Due to instability in FID it was difficult to see the diversity of images but some classes like ships and horse somewhat resembled it.
More Improvements¶
Use of Embeddings¶
So far, our conditioning only targets the discriminator. To better condition the generator, we will be expanding on the idea of concatenation by using embeddings to condition the noise at latent layers.
Right now we are just concatenating once and moving one. This approach seems flawed to us because in a latent array of 128 element, it's hard to control the output using 10 elements at the end. Most likely the model will forget whatever label information was embedded.
Now we will try to mutliply the noise vector with the label embeddings. This is to ensure the label has a large impact of the intial state of the latent vector. So the difference between the model's latent dim when the label changes should be drastic so the subsequent steps will be different from each other.
Modifying the Generator Loss¶
By observing the generator, I found out that it was not really learning the labels correctly. If you think about it, we only backpropagate using the classifier labels in the discriminator. The discirminator gets good at identifying labels. But that information is barely passed to the generator. How will the generator know that this label is related to a certain output. To explicity pass that information to the generator, I propose that we modify the generator loss function to include a classifier loss.
What is a classifer loss? Just like in ACGAN, the discriminator backpropagtes using the classifier loss, we will use the same classifier loss to backpropagate the generator. So when we get predictions from a discriminator during the gen_step, we will get the class labels predicted by the discirminator. We will then compare these class labels to the labels that were passed to the generator to prompt it to generator a certain type of image.
This additional backpropagation helps the discriminator learn that this label input should produce a certain label output in the generator. So the generator will essentially learn to get the classes correct.
So how do we blend in this classifier loss. Well, I initially tried to simply add the regular generator loss and this new classifier loss. But the result I obtained were very poor. I played around the parameters and eventully found that using a weighted sum of the regular loss and the classifier loss worked best. Specifially a 70-30 weight in favour of the regular loss.
# Generator architecture
class EmbGenerator(nn.Module):
def __init__(self, latent_dim, hidden_dim):
super(EmbGenerator, self).__init__()
self.hidden_dim = hidden_dim
self.latent_dim = latent_dim
self.embed = nn.Embedding(NUM_CLASS, 128)
self.input_layers = nn.Sequential(
nn.Linear(latent_dim, hidden_dim),
nn.BatchNorm1d(hidden_dim),
nn.LeakyReLU(0.1, inplace=True)
)
self.conv_layers = nn.Sequential(
nn.ConvTranspose2d(int(hidden_dim/4), 128, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(128),
nn.LeakyReLU(True),
nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(64),
nn.LeakyReLU(True),
nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(32),
nn.LeakyReLU(True),
nn.ConvTranspose2d(32, CHANNELS, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(CHANNELS),
nn.LeakyReLU(True),
nn.Tanh()
)
def forward(self, noise, classes):
labels = self.embed(torch.argmax(classes, dim=1))
#inputs = torch.cat((labels, noise), 1)
inputs = torch.mul(labels, noise)
outputs = self.input_layers(inputs)
reshape_shape = int(self.hidden_dim/4)
outputs = torch.reshape(outputs, (outputs.size()[0], reshape_shape, 2, 2))
return self.conv_layers(outputs)
class EmbedACGAN(ACGAN):
def __init__(self, generator, discriminator, train_loader):
super().__init__(generator, discriminator, train_loader)
def gen_step(self,img,label):
self.g_opt.zero_grad()
img = img.to(device)
label = label.to(device)
noise = torch.normal(0, 1, (img.size()[0], self.generator.latent_dim), device=device)
fake_imgs = self.generator(noise, label)
fake_pred,classifier = self.discriminator(fake_imgs, label)
real_label = torch.ones((img.size()[0], 1), device=device)
aux_loss_fake = nn.CrossEntropyLoss()(classifier, label.float())
g_loss = self.loss(fake_pred, real_label)*0.7 + aux_loss_fake*0.3
g_loss.backward()
self.g_opt.step()
return g_loss.cpu().item()
gen_shared = EmbGenerator(128,1024).to(device)
ac_spectral = ACDiscriminator().to(device)
acgan_embed = EmbedACGAN(gen_shared,ac_spectral,train_loader)
acgan_embed.fit(201,train_loader)
plot_losses(201,[ (acgan_embed,"ACGAN Embed")])
acgan_embed.save("acgan_embed-500e")
torch.cuda.empty_cache()
Training EmbedACGAN for 201 Epochs
Training Epoch 1/201: 100%|██████████| 98/98 [00:05<00:00, 17.19it/s, disc_loss=0.613, gen_loss=1.47]
100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 120.8494644165039, KID: 0.12496890127658844
Training Epoch 2/201: 100%|██████████| 98/98 [00:05<00:00, 18.28it/s, disc_loss=0.623, gen_loss=1.47] Training Epoch 3/201: 100%|██████████| 98/98 [00:05<00:00, 17.77it/s, disc_loss=0.634, gen_loss=1.23] Training Epoch 4/201: 100%|██████████| 98/98 [00:05<00:00, 17.87it/s, disc_loss=0.566, gen_loss=1.28] Training Epoch 5/201: 100%|██████████| 98/98 [00:05<00:00, 16.69it/s, disc_loss=0.563, gen_loss=1.23] Training Epoch 6/201: 100%|██████████| 98/98 [00:05<00:00, 18.26it/s, disc_loss=0.569, gen_loss=1.21] Training Epoch 7/201: 100%|██████████| 98/98 [00:02<00:00, 34.86it/s, disc_loss=0.645, gen_loss=1.5] Training Epoch 8/201: 100%|██████████| 98/98 [00:03<00:00, 25.08it/s, disc_loss=0.572, gen_loss=1.37] Training Epoch 9/201: 100%|██████████| 98/98 [00:03<00:00, 25.92it/s, gen_loss=1.15] Training Epoch 10/201: 100%|██████████| 98/98 [00:04<00:00, 23.01it/s, gen_loss=1.24] Training Epoch 11/201: 100%|██████████| 98/98 [00:04<00:00, 23.91it/s, disc_loss=0.645, gen_loss=1.37] Training Epoch 12/201: 100%|██████████| 98/98 [00:02<00:00, 33.80it/s, disc_loss=0.578, gen_loss=1.43] Training Epoch 13/201: 100%|██████████| 98/98 [00:04<00:00, 24.45it/s, gen_loss=1.18] Training Epoch 14/201: 100%|██████████| 98/98 [00:03<00:00, 26.89it/s, disc_loss=0.654, gen_loss=1.4] Training Epoch 15/201: 100%|██████████| 98/98 [00:04<00:00, 23.35it/s, disc_loss=0.537, gen_loss=1.28] Training Epoch 16/201: 100%|██████████| 98/98 [00:04<00:00, 23.48it/s, disc_loss=0.765, gen_loss=1.36] Training Epoch 17/201: 100%|██████████| 98/98 [00:03<00:00, 30.72it/s, gen_loss=0.542] Training Epoch 18/201: 100%|██████████| 98/98 [00:03<00:00, 29.33it/s, disc_loss=0.728, gen_loss=1.41] Training Epoch 19/201: 100%|██████████| 98/98 [00:03<00:00, 25.65it/s, gen_loss=1.16] Training Epoch 20/201: 100%|██████████| 98/98 [00:03<00:00, 24.92it/s, gen_loss=1.2] Training Epoch 21/201: 100%|██████████| 98/98 [00:03<00:00, 25.32it/s, gen_loss=1.18] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 60.29642105102539, KID: 0.0544266514480114
Training Epoch 22/201: 100%|██████████| 98/98 [00:03<00:00, 26.50it/s, gen_loss=0.615] Training Epoch 23/201: 100%|██████████| 98/98 [00:03<00:00, 28.10it/s, disc_loss=0.528, gen_loss=1.31] Training Epoch 24/201: 100%|██████████| 98/98 [00:03<00:00, 27.02it/s, gen_loss=1.32] Training Epoch 25/201: 100%|██████████| 98/98 [00:03<00:00, 26.68it/s, disc_loss=0.491, gen_loss=1.31] Training Epoch 26/201: 100%|██████████| 98/98 [00:04<00:00, 23.87it/s, disc_loss=0.483, gen_loss=1.49] Training Epoch 27/201: 100%|██████████| 98/98 [00:04<00:00, 22.59it/s, gen_loss=0.893] Training Epoch 28/201: 100%|██████████| 98/98 [00:02<00:00, 36.40it/s, disc_loss=0.622, gen_loss=1.38] Training Epoch 29/201: 100%|██████████| 98/98 [00:03<00:00, 25.81it/s, disc_loss=0.743, gen_loss=1.49] Training Epoch 30/201: 100%|██████████| 98/98 [00:04<00:00, 23.44it/s, gen_loss=1.47] Training Epoch 31/201: 100%|██████████| 98/98 [00:03<00:00, 24.60it/s, disc_loss=0.685, gen_loss=1.19] Training Epoch 32/201: 100%|██████████| 98/98 [00:04<00:00, 22.91it/s, gen_loss=1.08] Training Epoch 33/201: 100%|██████████| 98/98 [00:02<00:00, 34.82it/s, gen_loss=0.5] Training Epoch 34/201: 100%|██████████| 98/98 [00:04<00:00, 24.02it/s, disc_loss=0.578, gen_loss=1.21] Training Epoch 35/201: 100%|██████████| 98/98 [00:04<00:00, 24.33it/s, disc_loss=0.55, gen_loss=1.33] Training Epoch 36/201: 100%|██████████| 98/98 [00:03<00:00, 25.85it/s, disc_loss=0.647, gen_loss=1.05] Training Epoch 37/201: 100%|██████████| 98/98 [00:04<00:00, 24.38it/s, gen_loss=1.18] Training Epoch 38/201: 100%|██████████| 98/98 [00:03<00:00, 29.90it/s, gen_loss=0.531] Training Epoch 39/201: 100%|██████████| 98/98 [00:03<00:00, 28.52it/s, disc_loss=0.537, gen_loss=1.25] Training Epoch 40/201: 100%|██████████| 98/98 [00:03<00:00, 25.61it/s, disc_loss=0.608, gen_loss=1.4] Training Epoch 41/201: 100%|██████████| 98/98 [00:04<00:00, 23.11it/s, gen_loss=1.07] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 61.54356384277344, KID: 0.05552015081048012
Training Epoch 42/201: 100%|██████████| 98/98 [00:04<00:00, 21.56it/s, gen_loss=1.35] Training Epoch 43/201: 100%|██████████| 98/98 [00:03<00:00, 24.81it/s, gen_loss=0.641] Training Epoch 44/201: 100%|██████████| 98/98 [00:03<00:00, 30.41it/s, disc_loss=0.528, gen_loss=1.36] Training Epoch 45/201: 100%|██████████| 98/98 [00:03<00:00, 25.32it/s, disc_loss=0.53, gen_loss=1.29] Training Epoch 46/201: 100%|██████████| 98/98 [00:03<00:00, 27.40it/s, disc_loss=0.524, gen_loss=1.47] Training Epoch 47/201: 100%|██████████| 98/98 [00:03<00:00, 27.62it/s, disc_loss=0.531, gen_loss=1.64] Training Epoch 48/201: 100%|██████████| 98/98 [00:04<00:00, 21.93it/s, gen_loss=1.25] Training Epoch 49/201: 100%|██████████| 98/98 [00:02<00:00, 33.95it/s, disc_loss=0.458, gen_loss=1.68] Training Epoch 50/201: 100%|██████████| 98/98 [00:03<00:00, 25.10it/s, gen_loss=1.14] Training Epoch 51/201: 100%|██████████| 98/98 [00:03<00:00, 26.39it/s, disc_loss=0.484, gen_loss=1.57]
Training Epoch 52/201: 100%|██████████| 98/98 [00:04<00:00, 24.46it/s, gen_loss=1.08] Training Epoch 53/201: 100%|██████████| 98/98 [00:04<00:00, 23.99it/s, disc_loss=0.474, gen_loss=1.39] Training Epoch 54/201: 100%|██████████| 98/98 [00:02<00:00, 35.20it/s, gen_loss=0.5] Training Epoch 55/201: 100%|██████████| 98/98 [00:03<00:00, 25.55it/s, disc_loss=0.583, gen_loss=1.27] Training Epoch 56/201: 100%|██████████| 98/98 [00:03<00:00, 24.70it/s, disc_loss=0.455, gen_loss=1.71] Training Epoch 57/201: 100%|██████████| 98/98 [00:03<00:00, 25.72it/s, gen_loss=0.893] Training Epoch 58/201: 100%|██████████| 98/98 [00:03<00:00, 24.51it/s, disc_loss=0.463, gen_loss=1.49] Training Epoch 59/201: 100%|██████████| 98/98 [00:03<00:00, 28.42it/s, gen_loss=0.593] Training Epoch 60/201: 100%|██████████| 98/98 [00:03<00:00, 29.22it/s, disc_loss=0.497, gen_loss=1.56] Training Epoch 61/201: 100%|██████████| 98/98 [00:03<00:00, 25.37it/s, gen_loss=1.02] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 92.02079010009766, KID: 0.09327535331249237
Training Epoch 62/201: 100%|██████████| 98/98 [00:03<00:00, 28.19it/s, gen_loss=1.39] Training Epoch 63/201: 100%|██████████| 98/98 [00:04<00:00, 22.97it/s, disc_loss=0.48, gen_loss=1.38] Training Epoch 64/201: 100%|██████████| 98/98 [00:03<00:00, 24.87it/s, gen_loss=0.805] Training Epoch 65/201: 100%|██████████| 98/98 [00:02<00:00, 35.24it/s, disc_loss=0.679, gen_loss=1.4] Training Epoch 66/201: 100%|██████████| 98/98 [00:03<00:00, 28.31it/s, disc_loss=0.658, gen_loss=1.34] Training Epoch 67/201: 100%|██████████| 98/98 [00:03<00:00, 25.57it/s, disc_loss=0.636, gen_loss=1.3] Training Epoch 68/201: 100%|██████████| 98/98 [00:04<00:00, 23.96it/s, gen_loss=0.863] Training Epoch 69/201: 100%|██████████| 98/98 [00:04<00:00, 21.90it/s, disc_loss=0.476, gen_loss=1.37] Training Epoch 70/201: 100%|██████████| 98/98 [00:02<00:00, 38.01it/s, gen_loss=1.01] Training Epoch 71/201: 100%|██████████| 98/98 [00:04<00:00, 22.90it/s, disc_loss=0.545, gen_loss=1.16] Training Epoch 72/201: 100%|██████████| 98/98 [00:03<00:00, 25.28it/s, gen_loss=1.03] Training Epoch 73/201: 100%|██████████| 98/98 [00:03<00:00, 26.68it/s, disc_loss=0.482, gen_loss=1.41] Training Epoch 74/201: 100%|██████████| 98/98 [00:03<00:00, 25.37it/s, disc_loss=0.586, gen_loss=1.39] Training Epoch 75/201: 100%|██████████| 98/98 [00:03<00:00, 26.87it/s, gen_loss=0.532] Training Epoch 76/201: 100%|██████████| 98/98 [00:03<00:00, 27.90it/s, gen_loss=0.753] Training Epoch 77/201: 100%|██████████| 98/98 [00:04<00:00, 23.87it/s, disc_loss=0.551, gen_loss=1.34] Training Epoch 78/201: 100%|██████████| 98/98 [00:03<00:00, 25.34it/s, gen_loss=1.33] Training Epoch 79/201: 100%|██████████| 98/98 [00:04<00:00, 23.58it/s, disc_loss=0.487, gen_loss=1.34] Training Epoch 80/201: 100%|██████████| 98/98 [00:03<00:00, 29.51it/s, gen_loss=0.571] Training Epoch 81/201: 100%|██████████| 98/98 [00:03<00:00, 28.17it/s, disc_loss=0.625, gen_loss=1.41] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 50.23191452026367, KID: 0.04436388984322548
Training Epoch 82/201: 100%|██████████| 98/98 [00:04<00:00, 20.38it/s, disc_loss=0.516, gen_loss=1.31] Training Epoch 83/201: 100%|██████████| 98/98 [00:03<00:00, 25.40it/s, gen_loss=1.41] Training Epoch 84/201: 100%|██████████| 98/98 [00:04<00:00, 24.02it/s, disc_loss=0.482, gen_loss=1.47] Training Epoch 85/201: 100%|██████████| 98/98 [00:04<00:00, 22.64it/s, gen_loss=1.04] Training Epoch 86/201: 100%|██████████| 98/98 [00:03<00:00, 30.18it/s, gen_loss=1.32] Training Epoch 87/201: 100%|██████████| 98/98 [00:03<00:00, 24.55it/s, disc_loss=0.508, gen_loss=1.41] Training Epoch 88/201: 100%|██████████| 98/98 [00:04<00:00, 23.23it/s, disc_loss=0.479, gen_loss=1.58] Training Epoch 89/201: 100%|██████████| 98/98 [00:03<00:00, 27.94it/s, gen_loss=1.06] Training Epoch 90/201: 100%|██████████| 98/98 [00:04<00:00, 23.38it/s, disc_loss=0.465, gen_loss=1.5] Training Epoch 91/201: 100%|██████████| 98/98 [00:02<00:00, 39.26it/s, disc_loss=0.464, gen_loss=1.92] Training Epoch 92/201: 100%|██████████| 98/98 [00:04<00:00, 22.38it/s, disc_loss=0.442, gen_loss=1.71] Training Epoch 93/201: 100%|██████████| 98/98 [00:03<00:00, 25.82it/s, disc_loss=0.484, gen_loss=1.51] Training Epoch 94/201: 100%|██████████| 98/98 [00:03<00:00, 29.46it/s, gen_loss=1.15] Training Epoch 95/201: 100%|██████████| 98/98 [00:03<00:00, 24.97it/s, disc_loss=0.73, gen_loss=1.51] Training Epoch 96/201: 100%|██████████| 98/98 [00:03<00:00, 30.21it/s, gen_loss=0.526] Training Epoch 97/201: 100%|██████████| 98/98 [00:04<00:00, 23.47it/s, disc_loss=0.487, gen_loss=1.54] Training Epoch 98/201: 100%|██████████| 98/98 [00:03<00:00, 26.21it/s, gen_loss=1.15] Training Epoch 99/201: 100%|██████████| 98/98 [00:04<00:00, 24.32it/s, disc_loss=0.508, gen_loss=1.44] Training Epoch 100/201: 100%|██████████| 98/98 [00:04<00:00, 22.48it/s, disc_loss=0.49, gen_loss=1.46] Training Epoch 101/201: 100%|██████████| 98/98 [00:03<00:00, 31.55it/s, gen_loss=0.636] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 53.39084243774414, KID: 0.04365163668990135
Training Epoch 102/201: 100%|██████████| 98/98 [00:03<00:00, 26.96it/s, gen_loss=0.818] Training Epoch 103/201: 100%|██████████| 98/98 [00:04<00:00, 21.64it/s, gen_loss=1.01] Training Epoch 104/201: 100%|██████████| 98/98 [00:03<00:00, 25.32it/s, gen_loss=1.14] Training Epoch 105/201: 100%|██████████| 98/98 [00:04<00:00, 23.51it/s, disc_loss=0.47, gen_loss=1.46] Training Epoch 106/201: 100%|██████████| 98/98 [00:04<00:00, 24.10it/s, disc_loss=0.598, gen_loss=1.52] Training Epoch 107/201: 100%|██████████| 98/98 [00:03<00:00, 28.63it/s, disc_loss=0.445, gen_loss=1.52] Training Epoch 108/201: 100%|██████████| 98/98 [00:04<00:00, 22.69it/s, gen_loss=1.87] Training Epoch 109/201: 100%|██████████| 98/98 [00:03<00:00, 24.81it/s, disc_loss=0.727, gen_loss=1.47] Training Epoch 110/201: 100%|██████████| 98/98 [00:04<00:00, 23.75it/s, disc_loss=0.437, gen_loss=1.59] Training Epoch 111/201: 100%|██████████| 98/98 [00:04<00:00, 22.19it/s, disc_loss=0.478, gen_loss=1.4] Training Epoch 112/201: 100%|██████████| 98/98 [00:02<00:00, 36.47it/s, gen_loss=0.513] Training Epoch 113/201: 100%|██████████| 98/98 [00:04<00:00, 22.14it/s, disc_loss=0.45, gen_loss=1.46] Training Epoch 114/201: 100%|██████████| 98/98 [00:03<00:00, 25.02it/s, disc_loss=0.463, gen_loss=1.53] Training Epoch 115/201: 100%|██████████| 98/98 [00:03<00:00, 26.92it/s, disc_loss=1.02, gen_loss=2.19] Training Epoch 116/201: 100%|██████████| 98/98 [00:03<00:00, 24.52it/s, gen_loss=1.02] Training Epoch 117/201: 100%|██████████| 98/98 [00:03<00:00, 30.22it/s, gen_loss=0.524] Training Epoch 118/201: 100%|██████████| 98/98 [00:04<00:00, 24.39it/s, disc_loss=0.433, gen_loss=1.64] Training Epoch 119/201: 100%|██████████| 98/98 [00:03<00:00, 25.53it/s, gen_loss=0.981] Training Epoch 120/201: 100%|██████████| 98/98 [00:03<00:00, 27.09it/s, disc_loss=0.554, gen_loss=1.15] Training Epoch 121/201: 100%|██████████| 98/98 [00:04<00:00, 23.19it/s, disc_loss=0.482, gen_loss=1.43] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 43.2374267578125, KID: 0.0353148989379406
Training Epoch 122/201: 100%|██████████| 98/98 [00:03<00:00, 26.84it/s, gen_loss=0.751] Training Epoch 123/201: 100%|██████████| 98/98 [00:03<00:00, 30.53it/s, disc_loss=0.458, gen_loss=1.36] Training Epoch 124/201: 100%|██████████| 98/98 [00:04<00:00, 23.39it/s, gen_loss=1.28] Training Epoch 125/201: 100%|██████████| 98/98 [00:03<00:00, 25.45it/s, disc_loss=0.553, gen_loss=1.07] Training Epoch 126/201: 100%|██████████| 98/98 [00:03<00:00, 26.19it/s, disc_loss=0.803, gen_loss=1.2] Training Epoch 127/201: 100%|██████████| 98/98 [00:04<00:00, 22.19it/s, disc_loss=0.456, gen_loss=1.53] Training Epoch 128/201: 100%|██████████| 98/98 [00:02<00:00, 44.27it/s, disc_loss=1.14, gen_loss=1.38] Training Epoch 129/201: 100%|██████████| 98/98 [00:04<00:00, 22.73it/s, gen_loss=1.13] Training Epoch 130/201: 100%|██████████| 98/98 [00:03<00:00, 26.51it/s, gen_loss=1.21] Training Epoch 131/201: 100%|██████████| 98/98 [00:03<00:00, 24.59it/s, disc_loss=0.451, gen_loss=1.38] Training Epoch 132/201: 100%|██████████| 98/98 [00:04<00:00, 23.57it/s, disc_loss=0.568, gen_loss=1.33] Training Epoch 133/201: 100%|██████████| 98/98 [00:02<00:00, 33.27it/s, gen_loss=0.535] Training Epoch 134/201: 100%|██████████| 98/98 [00:03<00:00, 26.34it/s, disc_loss=0.891, gen_loss=1.21] Training Epoch 135/201: 100%|██████████| 98/98 [00:03<00:00, 27.02it/s, gen_loss=1.04] Training Epoch 136/201: 100%|██████████| 98/98 [00:04<00:00, 22.85it/s, gen_loss=0.777] Training Epoch 137/201: 100%|██████████| 98/98 [00:04<00:00, 21.39it/s, disc_loss=0.45, gen_loss=1.49] Training Epoch 138/201: 100%|██████████| 98/98 [00:03<00:00, 28.40it/s, gen_loss=0.698] Training Epoch 139/201: 100%|██████████| 98/98 [00:03<00:00, 31.22it/s, gen_loss=1.65] Training Epoch 140/201: 100%|██████████| 98/98 [00:03<00:00, 26.25it/s, disc_loss=0.443, gen_loss=1.71] Training Epoch 141/201: 100%|██████████| 98/98 [00:03<00:00, 26.72it/s, disc_loss=0.717, gen_loss=1.25] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 42.34085464477539, KID: 0.03285668417811394
Training Epoch 142/201: 100%|██████████| 98/98 [00:04<00:00, 23.31it/s, disc_loss=0.431, gen_loss=1.47] Training Epoch 143/201: 100%|██████████| 98/98 [00:03<00:00, 25.03it/s, disc_loss=0.473, gen_loss=1.42] Training Epoch 144/201: 100%|██████████| 98/98 [00:02<00:00, 43.29it/s, disc_loss=0.448, gen_loss=1.38] Training Epoch 145/201: 100%|██████████| 98/98 [00:04<00:00, 23.59it/s, disc_loss=0.444, gen_loss=1.48] Training Epoch 146/201: 100%|██████████| 98/98 [00:03<00:00, 26.12it/s, gen_loss=1.15] Training Epoch 147/201: 100%|██████████| 98/98 [00:03<00:00, 26.17it/s, gen_loss=1.03] Training Epoch 148/201: 100%|██████████| 98/98 [00:04<00:00, 22.04it/s, disc_loss=0.532, gen_loss=1.35] Training Epoch 149/201: 100%|██████████| 98/98 [00:02<00:00, 34.39it/s, gen_loss=0.509] Training Epoch 150/201: 100%|██████████| 98/98 [00:03<00:00, 25.94it/s, disc_loss=0.446, gen_loss=1.73] Training Epoch 151/201: 100%|██████████| 98/98 [00:04<00:00, 24.09it/s, disc_loss=0.454, gen_loss=1.54]
Training Epoch 152/201: 100%|██████████| 98/98 [00:04<00:00, 23.35it/s, disc_loss=0.498, gen_loss=1.47] Training Epoch 153/201: 100%|██████████| 98/98 [00:04<00:00, 21.22it/s, gen_loss=0.963] Training Epoch 154/201: 100%|██████████| 98/98 [00:02<00:00, 34.70it/s, gen_loss=0.563] Training Epoch 155/201: 100%|██████████| 98/98 [00:03<00:00, 26.58it/s, gen_loss=0.993] Training Epoch 156/201: 100%|██████████| 98/98 [00:03<00:00, 26.16it/s, disc_loss=0.464, gen_loss=1.53] Training Epoch 157/201: 100%|██████████| 98/98 [00:03<00:00, 24.55it/s, disc_loss=0.459, gen_loss=1.57] Training Epoch 158/201: 100%|██████████| 98/98 [00:04<00:00, 21.61it/s, disc_loss=0.469, gen_loss=1.5] Training Epoch 159/201: 100%|██████████| 98/98 [00:04<00:00, 24.01it/s, gen_loss=1.09] Training Epoch 160/201: 100%|██████████| 98/98 [00:03<00:00, 31.18it/s, disc_loss=0.468, gen_loss=1.62] Training Epoch 161/201: 100%|██████████| 98/98 [00:05<00:00, 19.21it/s, disc_loss=0.518, gen_loss=1.23] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 41.674072265625, KID: 0.033479221165180206
Training Epoch 162/201: 100%|██████████| 98/98 [00:04<00:00, 22.85it/s, disc_loss=0.459, gen_loss=1.38] Training Epoch 163/201: 100%|██████████| 98/98 [00:04<00:00, 24.00it/s, disc_loss=0.478, gen_loss=1.29] Training Epoch 164/201: 100%|██████████| 98/98 [00:04<00:00, 20.48it/s, gen_loss=0.875] Training Epoch 165/201: 100%|██████████| 98/98 [00:02<00:00, 34.82it/s, gen_loss=0.484] Training Epoch 166/201: 100%|██████████| 98/98 [00:04<00:00, 22.16it/s, gen_loss=0.883] Training Epoch 167/201: 100%|██████████| 98/98 [00:04<00:00, 23.11it/s, disc_loss=0.483, gen_loss=1.77] Training Epoch 168/201: 100%|██████████| 98/98 [00:04<00:00, 22.55it/s, disc_loss=0.443, gen_loss=1.48] Training Epoch 169/201: 100%|██████████| 98/98 [00:04<00:00, 21.00it/s, disc_loss=0.488, gen_loss=1.36] Training Epoch 170/201: 100%|██████████| 98/98 [00:03<00:00, 28.15it/s, gen_loss=0.505] Training Epoch 171/201: 100%|██████████| 98/98 [00:03<00:00, 25.39it/s, gen_loss=0.756] Training Epoch 172/201: 100%|██████████| 98/98 [00:04<00:00, 22.63it/s, gen_loss=0.823] Training Epoch 173/201: 100%|██████████| 98/98 [00:04<00:00, 23.52it/s, gen_loss=0.816] Training Epoch 174/201: 100%|██████████| 98/98 [00:04<00:00, 20.21it/s, disc_loss=0.412, gen_loss=1.55] Training Epoch 175/201: 100%|██████████| 98/98 [00:03<00:00, 27.92it/s, gen_loss=0.793] Training Epoch 176/201: 100%|██████████| 98/98 [00:03<00:00, 29.54it/s, disc_loss=0.44, gen_loss=1.66] Training Epoch 177/201: 100%|██████████| 98/98 [00:04<00:00, 23.06it/s, gen_loss=0.974] Training Epoch 178/201: 100%|██████████| 98/98 [00:04<00:00, 23.63it/s, disc_loss=0.53, gen_loss=1.68] Training Epoch 179/201: 100%|██████████| 98/98 [00:04<00:00, 22.03it/s, disc_loss=0.461, gen_loss=1.44] Training Epoch 180/201: 100%|██████████| 98/98 [00:04<00:00, 23.21it/s, disc_loss=0.454, gen_loss=1.4] Training Epoch 181/201: 100%|██████████| 98/98 [00:02<00:00, 47.19it/s, disc_loss=1.43, gen_loss=1.61] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 46.608787536621094, KID: 0.0358121395111084
Training Epoch 182/201: 100%|██████████| 98/98 [00:05<00:00, 18.91it/s, disc_loss=0.442, gen_loss=1.42] Training Epoch 183/201: 100%|██████████| 98/98 [00:04<00:00, 22.32it/s, gen_loss=0.988] Training Epoch 184/201: 100%|██████████| 98/98 [00:03<00:00, 25.10it/s, gen_loss=0.956] Training Epoch 185/201: 100%|██████████| 98/98 [00:04<00:00, 24.08it/s, disc_loss=0.445, gen_loss=1.4] Training Epoch 186/201: 100%|██████████| 98/98 [00:02<00:00, 38.67it/s, gen_loss=0.52] Training Epoch 187/201: 100%|██████████| 98/98 [00:04<00:00, 24.39it/s, gen_loss=1.64] Training Epoch 188/201: 100%|██████████| 98/98 [00:04<00:00, 22.77it/s, disc_loss=0.443, gen_loss=1.58] Training Epoch 189/201: 100%|██████████| 98/98 [00:03<00:00, 24.81it/s, disc_loss=0.615, gen_loss=1.37] Training Epoch 190/201: 100%|██████████| 98/98 [00:04<00:00, 20.70it/s, gen_loss=1.08] Training Epoch 191/201: 100%|██████████| 98/98 [00:03<00:00, 32.10it/s, gen_loss=0.769] Training Epoch 192/201: 100%|██████████| 98/98 [00:03<00:00, 27.61it/s, disc_loss=0.442, gen_loss=1.46] Training Epoch 193/201: 100%|██████████| 98/98 [00:04<00:00, 23.28it/s, gen_loss=1.73] Training Epoch 194/201: 100%|██████████| 98/98 [00:04<00:00, 22.63it/s, disc_loss=0.414, gen_loss=1.5] Training Epoch 195/201: 100%|██████████| 98/98 [00:03<00:00, 25.26it/s, disc_loss=0.411, gen_loss=1.53] Training Epoch 196/201: 100%|██████████| 98/98 [00:03<00:00, 25.42it/s, disc_loss=0.511, gen_loss=1.46] Training Epoch 197/201: 100%|██████████| 98/98 [00:02<00:00, 35.75it/s, disc_loss=0.468, gen_loss=1.64] Training Epoch 198/201: 100%|██████████| 98/98 [00:04<00:00, 20.85it/s, disc_loss=0.449, gen_loss=1.72] Training Epoch 199/201: 100%|██████████| 98/98 [00:04<00:00, 21.80it/s, disc_loss=0.446, gen_loss=1.48] Training Epoch 200/201: 100%|██████████| 98/98 [00:03<00:00, 24.80it/s, disc_loss=0.573, gen_loss=0.919] Training Epoch 201/201: 100%|██████████| 98/98 [00:04<00:00, 20.59it/s, gen_loss=1.77] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 30.604801177978516, KID: 0.02268698811531067
<Figure size 1200x600 with 0 Axes>
Observations:¶
- The FID seemed to improve alot and the images seemed to fit the class more
- Ideas of using embeddings seems promising
- Training is relatively stable with few collapses here and there.
DiffAugment¶
As explained in an earlier section, using diff augment has been shown by researchers to greatly improve FID scores. We will utilise the Kornia library to modify the Discriminator. We will be applying Augmentation to all data that enters the discriminator in line with the "Augment Everywhere" approach.
augment = nn.Sequential(
K.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.2, p=0.5),
K.RandomAffine(degrees=0, translate=(0.2, 0.2), scale=(1.0, 1.0), p=0.5),
K.RandomErasing(scale=(0.1, 0.2), ratio=(0.3, 3.3), same_on_batch=False, p=0.5)
)
class ACAugDiscriminator(nn.Module):
def __init__(self):
super(ACAugDiscriminator, self).__init__()
self.conv_layers = nn.Sequential(
nn.utils.spectral_norm(nn.Conv2d(CHANNELS, 32, kernel_size=4, stride=2, padding=1)),
nn.BatchNorm2d(32),
nn.LeakyReLU(0.1, inplace=True),
nn.utils.spectral_norm(nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1)),
nn.BatchNorm2d(64),
nn.LeakyReLU(0.1, inplace=True),
nn.utils.spectral_norm(nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1)),
nn.BatchNorm2d(128),
nn.LeakyReLU(0.1, inplace=True),
nn.utils.spectral_norm(nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1)),
nn.BatchNorm2d(256),
nn.LeakyReLU(0.1, inplace=True),
nn.AvgPool2d(2, stride=2)
)
self.output_layers = nn.Sequential(
nn.Linear(256, 512),
nn.LeakyReLU(0.1, inplace=True),
nn.Linear(512, 1),
nn.Sigmoid()
)
self.classifier = nn.Sequential(
nn.Linear(256, 512),
nn.LeakyReLU(0.1, inplace=True),
nn.Linear(512, NUM_CLASS),
nn.Softmax()
)
if augment is not None:
self.augment = augment
else:
self.augment = nn.Identity()
def forward(self, x,label=None, aug=True):
if self.training and aug==True:
x = self.augment(x)
#labels = self.embed(torch.argmax(labels,axis=1))
output = self.conv_layers(x).squeeze()
#x = torch.cat((output,labels), dim=1)
f = self.output_layers(output)
c = self.classifier(output)
return f,c
removing r1 loss since we are using DiffAugment.
class DiffGAN(EmbedACGAN):
def __init__(self, generator, discriminator, train_loader):
super().__init__(generator, discriminator, train_loader)
def disc_step(self,img,label):
self.d_opt.zero_grad()
img = img.to(device)
label = label.to(device)
img.requires_grad = True
# Discriminator Loss
noise = torch.normal(0, 1, (img.size()[0], self.generator.latent_dim), device=device)
fake_imgs = self.generator(noise, label)
fake_pred, label_pred_fake = self.discriminator(fake_imgs)
real_pred, label_pred_real = self.discriminator(img)
fake_label = smooth_labels(torch.zeros((img.size()[0], 1), device=device))
real_label = smooth_labels(torch.ones((img.size()[0], 1), device=device))
d_loss = (self.loss(fake_pred, fake_label) + self.loss(real_pred, real_label)) / 2
d_loss.backward()
# Classifer Loss
noise = torch.normal(0, 1, (img.size()[0], self.generator.latent_dim), device=device)
fake_imgs = self.generator(noise, label)
fake_pred, label_pred_fake = self.discriminator(fake_imgs)
real_pred, label_pred_real = self.discriminator(img)
aux_loss_fake = nn.CrossEntropyLoss()(label_pred_fake, label.float())
aux_loss_real = nn.CrossEntropyLoss()(label_pred_real, label.float())
aux_loss = (aux_loss_fake + aux_loss_real) / 2
aux_loss.backward()
self.d_opt.step()
return d_loss.cpu().item()
I will be training this for 200 epochs only since the model takes 5x longer than previous models to train. I can't compare them directly cuz the epochs are different.
gen_shared = EmbGenerator(128,1024).to(device)
diff_disc = ACAugDiscriminator().to(device)
diffgan = DiffGAN(gen_shared,diff_disc,train_loader)
diffgan.fit(201,train_loader)
plot_losses(201,[(diffgan,"DiffAugGAN")])
diffgan.save("diffgan-200e")
torch.cuda.empty_cache()
Training DiffGAN for 201 Epochs
Training Epoch 1/201: 100%|██████████| 98/98 [00:21<00:00, 4.67it/s, disc_loss=0.545, gen_loss=1.44]
100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 130.06069946289062, KID: 0.13312369585037231
Training Epoch 2/201: 100%|██████████| 98/98 [00:20<00:00, 4.70it/s, disc_loss=0.565, gen_loss=1.37] Training Epoch 3/201: 100%|██████████| 98/98 [00:20<00:00, 4.69it/s, disc_loss=0.65, gen_loss=1.39] Training Epoch 4/201: 100%|██████████| 98/98 [00:21<00:00, 4.59it/s, disc_loss=0.5, gen_loss=1.42] Training Epoch 5/201: 100%|██████████| 98/98 [00:21<00:00, 4.65it/s, disc_loss=0.51, gen_loss=1.22] Training Epoch 6/201: 100%|██████████| 98/98 [00:21<00:00, 4.64it/s, disc_loss=0.484, gen_loss=1.39] Training Epoch 7/201: 100%|██████████| 98/98 [00:10<00:00, 8.96it/s, disc_loss=0.496, gen_loss=1.36] Training Epoch 8/201: 100%|██████████| 98/98 [00:14<00:00, 6.65it/s, disc_loss=0.558, gen_loss=1.38] Training Epoch 9/201: 100%|██████████| 98/98 [00:15<00:00, 6.24it/s, disc_loss=0.463, gen_loss=1.42] Training Epoch 10/201: 100%|██████████| 98/98 [00:20<00:00, 4.78it/s, disc_loss=0.564, gen_loss=1.12] Training Epoch 11/201: 100%|██████████| 98/98 [00:21<00:00, 4.67it/s, disc_loss=0.477, gen_loss=1.68] Training Epoch 12/201: 100%|██████████| 98/98 [00:13<00:00, 7.39it/s, gen_loss=0.771] Training Epoch 13/201: 100%|██████████| 98/98 [00:15<00:00, 6.24it/s, gen_loss=1.64] Training Epoch 14/201: 100%|██████████| 98/98 [00:14<00:00, 6.96it/s, gen_loss=1.42] Training Epoch 15/201: 100%|██████████| 98/98 [00:17<00:00, 5.48it/s, disc_loss=0.495, gen_loss=1.33] Training Epoch 16/201: 100%|██████████| 98/98 [00:19<00:00, 5.11it/s, disc_loss=0.486, gen_loss=1.62] Training Epoch 17/201: 100%|██████████| 98/98 [00:16<00:00, 5.89it/s, gen_loss=0.749] Training Epoch 18/201: 100%|██████████| 98/98 [00:13<00:00, 7.29it/s, disc_loss=0.424, gen_loss=1.67] Training Epoch 19/201: 100%|██████████| 98/98 [00:15<00:00, 6.13it/s, disc_loss=0.662, gen_loss=2.04] Training Epoch 20/201: 100%|██████████| 98/98 [00:16<00:00, 5.79it/s, disc_loss=0.51, gen_loss=1.6] Training Epoch 21/201: 100%|██████████| 98/98 [00:19<00:00, 5.12it/s, disc_loss=0.435, gen_loss=1.98] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 58.730560302734375, KID: 0.04517936706542969
Training Epoch 22/201: 100%|██████████| 98/98 [00:20<00:00, 4.76it/s, gen_loss=0.977] Training Epoch 23/201: 100%|██████████| 98/98 [00:11<00:00, 8.37it/s, disc_loss=0.46, gen_loss=1.92] Training Epoch 24/201: 100%|██████████| 98/98 [00:16<00:00, 5.94it/s, gen_loss=1.28] Training Epoch 25/201: 100%|██████████| 98/98 [00:13<00:00, 7.04it/s, gen_loss=1.62] Training Epoch 26/201: 100%|██████████| 98/98 [00:15<00:00, 6.22it/s, gen_loss=1.56] Training Epoch 27/201: 100%|██████████| 98/98 [00:18<00:00, 5.27it/s, disc_loss=0.424, gen_loss=1.74] Training Epoch 28/201: 100%|██████████| 98/98 [00:10<00:00, 9.20it/s, disc_loss=0.521, gen_loss=1.61] Training Epoch 29/201: 100%|██████████| 98/98 [00:15<00:00, 6.20it/s, disc_loss=0.571, gen_loss=1.48] Training Epoch 30/201: 100%|██████████| 98/98 [00:15<00:00, 6.49it/s, disc_loss=0.515, gen_loss=1.49] Training Epoch 31/201: 100%|██████████| 98/98 [00:18<00:00, 5.25it/s, gen_loss=1.24] Training Epoch 32/201: 100%|██████████| 98/98 [00:15<00:00, 6.44it/s, gen_loss=1.82] Training Epoch 33/201: 100%|██████████| 98/98 [00:13<00:00, 7.17it/s, gen_loss=0.802] Training Epoch 34/201: 100%|██████████| 98/98 [00:15<00:00, 6.43it/s, gen_loss=1.1] Training Epoch 35/201: 100%|██████████| 98/98 [00:14<00:00, 6.63it/s, disc_loss=0.439, gen_loss=1.57] Training Epoch 36/201: 100%|██████████| 98/98 [00:16<00:00, 5.91it/s, disc_loss=0.512, gen_loss=2.47] Training Epoch 37/201: 100%|██████████| 98/98 [00:16<00:00, 6.04it/s, gen_loss=1.67] Training Epoch 38/201: 100%|██████████| 98/98 [00:16<00:00, 5.78it/s, gen_loss=0.812] Training Epoch 39/201: 100%|██████████| 98/98 [00:12<00:00, 7.92it/s, disc_loss=0.438, gen_loss=1.62] Training Epoch 40/201: 100%|██████████| 98/98 [00:14<00:00, 6.55it/s, disc_loss=0.464, gen_loss=1.31] Training Epoch 41/201: 100%|██████████| 98/98 [00:17<00:00, 5.45it/s, disc_loss=0.486, gen_loss=1.78] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 41.49292755126953, KID: 0.03215952217578888
Training Epoch 42/201: 100%|██████████| 98/98 [00:16<00:00, 5.87it/s, disc_loss=0.474, gen_loss=2.02] Training Epoch 43/201: 100%|██████████| 98/98 [00:17<00:00, 5.68it/s, disc_loss=0.419, gen_loss=1.42] Training Epoch 44/201: 100%|██████████| 98/98 [00:10<00:00, 8.99it/s, disc_loss=0.581, gen_loss=1.82] Training Epoch 45/201: 100%|██████████| 98/98 [00:13<00:00, 7.33it/s, gen_loss=2.08] Training Epoch 46/201: 100%|██████████| 98/98 [00:13<00:00, 7.20it/s, disc_loss=0.358, gen_loss=2.34] Training Epoch 47/201: 100%|██████████| 98/98 [00:12<00:00, 7.66it/s, gen_loss=1.77] Training Epoch 48/201: 100%|██████████| 98/98 [00:12<00:00, 7.87it/s, disc_loss=0.671, gen_loss=1.67] Training Epoch 49/201: 100%|██████████| 98/98 [00:12<00:00, 8.15it/s, gen_loss=0.708] Training Epoch 50/201: 100%|██████████| 98/98 [00:16<00:00, 6.06it/s, gen_loss=1.86] Training Epoch 51/201: 100%|██████████| 98/98 [00:11<00:00, 8.22it/s, gen_loss=1.75]
Training Epoch 52/201: 100%|██████████| 98/98 [00:14<00:00, 6.59it/s, disc_loss=0.409, gen_loss=1.92] Training Epoch 53/201: 100%|██████████| 98/98 [00:14<00:00, 6.89it/s, gen_loss=1.51] Training Epoch 54/201: 100%|██████████| 98/98 [00:13<00:00, 7.10it/s, gen_loss=0.756] Training Epoch 55/201: 100%|██████████| 98/98 [00:12<00:00, 7.79it/s, disc_loss=0.364, gen_loss=2.08] Training Epoch 56/201: 100%|██████████| 98/98 [00:13<00:00, 7.24it/s, disc_loss=0.445, gen_loss=1.59] Training Epoch 57/201: 100%|██████████| 98/98 [00:15<00:00, 6.41it/s, gen_loss=1.11] Training Epoch 58/201: 100%|██████████| 98/98 [00:15<00:00, 6.36it/s, gen_loss=1.1] Training Epoch 59/201: 100%|██████████| 98/98 [00:15<00:00, 6.50it/s, gen_loss=1.19] Training Epoch 60/201: 100%|██████████| 98/98 [00:10<00:00, 9.18it/s, gen_loss=1.11] Training Epoch 61/201: 100%|██████████| 98/98 [00:16<00:00, 6.12it/s, gen_loss=1.5] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 45.41862869262695, KID: 0.03369811922311783
Training Epoch 62/201: 100%|██████████| 98/98 [00:13<00:00, 7.07it/s, disc_loss=0.372, gen_loss=2.15] Training Epoch 63/201: 100%|██████████| 98/98 [00:16<00:00, 5.80it/s, disc_loss=0.415, gen_loss=1.81] Training Epoch 64/201: 100%|██████████| 98/98 [00:16<00:00, 5.96it/s, disc_loss=0.408, gen_loss=1.58] Training Epoch 65/201: 100%|██████████| 98/98 [00:11<00:00, 8.19it/s, gen_loss=0.699] Training Epoch 66/201: 100%|██████████| 98/98 [00:15<00:00, 6.17it/s, disc_loss=0.4, gen_loss=1.99] Training Epoch 67/201: 100%|██████████| 98/98 [00:15<00:00, 6.19it/s, disc_loss=0.423, gen_loss=1.77] Training Epoch 68/201: 100%|██████████| 98/98 [00:16<00:00, 5.99it/s, disc_loss=0.381, gen_loss=1.7] Training Epoch 69/201: 100%|██████████| 98/98 [00:13<00:00, 7.08it/s, disc_loss=0.499, gen_loss=1.72] Training Epoch 70/201: 100%|██████████| 98/98 [00:15<00:00, 6.25it/s, gen_loss=0.71] Training Epoch 71/201: 100%|██████████| 98/98 [00:14<00:00, 6.83it/s, disc_loss=0.377, gen_loss=1.6] Training Epoch 72/201: 100%|██████████| 98/98 [00:16<00:00, 5.86it/s, disc_loss=0.43, gen_loss=1.82] Training Epoch 73/201: 100%|██████████| 98/98 [00:15<00:00, 6.19it/s, disc_loss=0.42, gen_loss=1.49] Training Epoch 74/201: 100%|██████████| 98/98 [00:15<00:00, 6.16it/s, disc_loss=0.571, gen_loss=2.5] Training Epoch 75/201: 100%|██████████| 98/98 [00:17<00:00, 5.74it/s, gen_loss=1.03] Training Epoch 76/201: 100%|██████████| 98/98 [00:09<00:00, 10.53it/s, gen_loss=1.64] Training Epoch 77/201: 100%|██████████| 98/98 [00:14<00:00, 6.76it/s, gen_loss=1.15] Training Epoch 78/201: 100%|██████████| 98/98 [00:13<00:00, 7.43it/s, disc_loss=0.503, gen_loss=1.4] Training Epoch 79/201: 100%|██████████| 98/98 [00:14<00:00, 6.78it/s, disc_loss=0.436, gen_loss=2.35] Training Epoch 80/201: 100%|██████████| 98/98 [00:12<00:00, 7.89it/s, disc_loss=0.35, gen_loss=2.35] Training Epoch 81/201: 100%|██████████| 98/98 [00:10<00:00, 9.20it/s, gen_loss=0.728] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 48.75973129272461, KID: 0.03827647864818573
Training Epoch 82/201: 100%|██████████| 98/98 [00:12<00:00, 8.05it/s, disc_loss=0.412, gen_loss=2.03] Training Epoch 83/201: 100%|██████████| 98/98 [00:14<00:00, 6.95it/s, gen_loss=1.58] Training Epoch 84/201: 100%|██████████| 98/98 [00:14<00:00, 6.70it/s, gen_loss=1.28] Training Epoch 85/201: 100%|██████████| 98/98 [00:17<00:00, 5.75it/s, gen_loss=1.59] Training Epoch 86/201: 100%|██████████| 98/98 [00:13<00:00, 7.41it/s, gen_loss=0.747] Training Epoch 87/201: 100%|██████████| 98/98 [00:12<00:00, 7.55it/s, disc_loss=0.374, gen_loss=1.84] Training Epoch 88/201: 100%|██████████| 98/98 [00:14<00:00, 6.87it/s, disc_loss=0.453, gen_loss=1.45] Training Epoch 89/201: 100%|██████████| 98/98 [00:13<00:00, 7.24it/s, gen_loss=1] Training Epoch 90/201: 100%|██████████| 98/98 [00:16<00:00, 6.12it/s, gen_loss=1.6] Training Epoch 91/201: 100%|██████████| 98/98 [00:15<00:00, 6.37it/s, gen_loss=1.11] Training Epoch 92/201: 100%|██████████| 98/98 [00:10<00:00, 9.38it/s, disc_loss=0.428, gen_loss=1.63] Training Epoch 93/201: 100%|██████████| 98/98 [00:16<00:00, 6.06it/s, gen_loss=1.38] Training Epoch 94/201: 100%|██████████| 98/98 [00:12<00:00, 7.60it/s, disc_loss=0.403, gen_loss=1.91] Training Epoch 95/201: 100%|██████████| 98/98 [00:16<00:00, 5.97it/s, disc_loss=0.409, gen_loss=2.05] Training Epoch 96/201: 100%|██████████| 98/98 [00:16<00:00, 5.77it/s, disc_loss=0.359, gen_loss=1.97] Training Epoch 97/201: 100%|██████████| 98/98 [00:11<00:00, 8.45it/s, gen_loss=0.702] Training Epoch 98/201: 100%|██████████| 98/98 [00:12<00:00, 7.71it/s, gen_loss=1.2] Training Epoch 99/201: 100%|██████████| 98/98 [00:12<00:00, 7.61it/s, gen_loss=1.56] Training Epoch 100/201: 100%|██████████| 98/98 [00:14<00:00, 6.54it/s, gen_loss=1.53] Training Epoch 101/201: 100%|██████████| 98/98 [00:16<00:00, 6.04it/s, gen_loss=1.26] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 36.560211181640625, KID: 0.027411479502916336
Training Epoch 102/201: 100%|██████████| 98/98 [00:13<00:00, 7.47it/s, gen_loss=1.57] Training Epoch 103/201: 100%|██████████| 98/98 [00:09<00:00, 10.75it/s, gen_loss=0.994] Training Epoch 104/201: 100%|██████████| 98/98 [00:09<00:00, 10.08it/s, gen_loss=2.69] Training Epoch 105/201: 100%|██████████| 98/98 [00:09<00:00, 10.78it/s, gen_loss=1.31] Training Epoch 106/201: 100%|██████████| 98/98 [00:11<00:00, 8.25it/s, disc_loss=0.361, gen_loss=2.73] Training Epoch 107/201: 100%|██████████| 98/98 [00:10<00:00, 9.19it/s, disc_loss=0.342, gen_loss=2.15] Training Epoch 108/201: 100%|██████████| 98/98 [00:09<00:00, 9.94it/s, gen_loss=1.81] Training Epoch 109/201: 100%|██████████| 98/98 [00:09<00:00, 10.77it/s, gen_loss=1.44] Training Epoch 110/201: 100%|██████████| 98/98 [00:09<00:00, 10.68it/s, gen_loss=1.2] Training Epoch 111/201: 100%|██████████| 98/98 [00:09<00:00, 10.75it/s, gen_loss=1.1] Training Epoch 112/201: 100%|██████████| 98/98 [00:09<00:00, 10.26it/s, gen_loss=2.77] Training Epoch 113/201: 100%|██████████| 98/98 [00:08<00:00, 10.94it/s, gen_loss=1.94] Training Epoch 114/201: 100%|██████████| 98/98 [00:09<00:00, 10.14it/s, gen_loss=2.71] Training Epoch 115/201: 100%|██████████| 98/98 [00:08<00:00, 10.90it/s, gen_loss=1.49] Training Epoch 116/201: 100%|██████████| 98/98 [00:09<00:00, 10.75it/s, gen_loss=1.35] Training Epoch 117/201: 100%|██████████| 98/98 [00:09<00:00, 10.88it/s, gen_loss=0.992] Training Epoch 118/201: 100%|██████████| 98/98 [00:16<00:00, 6.02it/s, disc_loss=0.339, gen_loss=2.1] Training Epoch 119/201: 100%|██████████| 98/98 [00:09<00:00, 10.63it/s, gen_loss=1.7] Training Epoch 120/201: 100%|██████████| 98/98 [00:09<00:00, 10.81it/s, gen_loss=1.8] Training Epoch 121/201: 100%|██████████| 98/98 [00:09<00:00, 10.77it/s, gen_loss=1.56] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 299.370849609375, KID: 0.41113391518592834
Training Epoch 122/201: 100%|██████████| 98/98 [00:09<00:00, 10.44it/s, gen_loss=2.02] Training Epoch 123/201: 100%|██████████| 98/98 [00:09<00:00, 10.72it/s, gen_loss=1.78] Training Epoch 124/201: 100%|██████████| 98/98 [00:09<00:00, 10.79it/s, gen_loss=1.39] Training Epoch 125/201: 100%|██████████| 98/98 [00:09<00:00, 10.79it/s, gen_loss=1.29] Training Epoch 126/201: 100%|██████████| 98/98 [00:09<00:00, 10.86it/s, gen_loss=1.29] Training Epoch 127/201: 100%|██████████| 98/98 [00:09<00:00, 10.80it/s, gen_loss=1.34] Training Epoch 128/201: 100%|██████████| 98/98 [00:10<00:00, 9.43it/s, gen_loss=2.08] Training Epoch 129/201: 100%|██████████| 98/98 [00:09<00:00, 10.44it/s, gen_loss=2.07] Training Epoch 130/201: 100%|██████████| 98/98 [00:08<00:00, 10.90it/s, gen_loss=1.89] Training Epoch 131/201: 100%|██████████| 98/98 [00:09<00:00, 10.84it/s, gen_loss=1.78] Training Epoch 132/201: 100%|██████████| 98/98 [00:09<00:00, 10.71it/s, gen_loss=1.78] Training Epoch 133/201: 100%|██████████| 98/98 [00:08<00:00, 10.89it/s, gen_loss=1.46] Training Epoch 134/201: 100%|██████████| 98/98 [00:09<00:00, 10.78it/s, gen_loss=1.58] Training Epoch 135/201: 100%|██████████| 98/98 [00:09<00:00, 10.74it/s, gen_loss=1.36] Training Epoch 136/201: 100%|██████████| 98/98 [00:09<00:00, 10.77it/s, gen_loss=1.33] Training Epoch 137/201: 100%|██████████| 98/98 [00:09<00:00, 10.54it/s, gen_loss=1.29] Training Epoch 138/201: 100%|██████████| 98/98 [00:09<00:00, 10.78it/s, gen_loss=1.24] Training Epoch 139/201: 100%|██████████| 98/98 [00:09<00:00, 10.68it/s, gen_loss=2.65] Training Epoch 140/201: 100%|██████████| 98/98 [00:08<00:00, 10.96it/s, gen_loss=2.35] Training Epoch 141/201: 100%|██████████| 98/98 [00:09<00:00, 10.81it/s, gen_loss=2.17] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 291.2506408691406, KID: 0.41043558716773987
Training Epoch 142/201: 100%|██████████| 98/98 [00:09<00:00, 10.63it/s, gen_loss=1.93] Training Epoch 143/201: 100%|██████████| 98/98 [00:09<00:00, 10.79it/s, gen_loss=1.71] Training Epoch 144/201: 100%|██████████| 98/98 [00:09<00:00, 10.49it/s, gen_loss=1.54] Training Epoch 145/201: 100%|██████████| 98/98 [00:09<00:00, 10.81it/s, gen_loss=1.19] Training Epoch 146/201: 100%|██████████| 98/98 [00:08<00:00, 10.89it/s, gen_loss=0.931] Training Epoch 147/201: 100%|██████████| 98/98 [00:15<00:00, 6.32it/s, gen_loss=2.28] Training Epoch 148/201: 100%|██████████| 98/98 [00:09<00:00, 10.66it/s, gen_loss=1.74] Training Epoch 149/201: 100%|██████████| 98/98 [00:09<00:00, 10.78it/s, gen_loss=1.51] Training Epoch 150/201: 100%|██████████| 98/98 [00:09<00:00, 10.61it/s, gen_loss=2.37] Training Epoch 151/201: 100%|██████████| 98/98 [00:09<00:00, 10.77it/s, gen_loss=2.32]
Training Epoch 152/201: 100%|██████████| 98/98 [00:09<00:00, 10.83it/s, gen_loss=1.96] Training Epoch 153/201: 100%|██████████| 98/98 [00:09<00:00, 10.67it/s, gen_loss=1.59] Training Epoch 154/201: 100%|██████████| 98/98 [00:09<00:00, 10.80it/s, gen_loss=1.42] Training Epoch 155/201: 100%|██████████| 98/98 [00:09<00:00, 10.86it/s, gen_loss=1.35] Training Epoch 156/201: 100%|██████████| 98/98 [00:09<00:00, 10.80it/s, gen_loss=1.37] Training Epoch 157/201: 100%|██████████| 98/98 [00:09<00:00, 10.54it/s, gen_loss=1.52] Training Epoch 158/201: 100%|██████████| 98/98 [00:09<00:00, 10.77it/s, gen_loss=1.43] Training Epoch 159/201: 100%|██████████| 98/98 [00:09<00:00, 10.89it/s, gen_loss=1.45] Training Epoch 160/201: 100%|██████████| 98/98 [00:09<00:00, 10.59it/s, gen_loss=2.46] Training Epoch 161/201: 100%|██████████| 98/98 [00:09<00:00, 10.69it/s, gen_loss=2.24] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 290.7508544921875, KID: 0.40609511733055115
Training Epoch 162/201: 100%|██████████| 98/98 [00:09<00:00, 10.66it/s, gen_loss=5.41] Training Epoch 163/201: 100%|██████████| 98/98 [00:09<00:00, 10.76it/s, gen_loss=4.09] Training Epoch 164/201: 100%|██████████| 98/98 [00:09<00:00, 10.68it/s, gen_loss=3.47] Training Epoch 165/201: 100%|██████████| 98/98 [00:09<00:00, 10.74it/s, gen_loss=2.6] Training Epoch 166/201: 100%|██████████| 98/98 [00:09<00:00, 10.80it/s, gen_loss=2.59] Training Epoch 167/201: 100%|██████████| 98/98 [00:08<00:00, 10.97it/s, gen_loss=2.31] Training Epoch 168/201: 100%|██████████| 98/98 [00:09<00:00, 10.79it/s, gen_loss=1.95] Training Epoch 169/201: 100%|██████████| 98/98 [00:08<00:00, 10.90it/s, gen_loss=2.34] Training Epoch 170/201: 100%|██████████| 98/98 [00:09<00:00, 10.76it/s, gen_loss=2.15] Training Epoch 171/201: 100%|██████████| 98/98 [00:09<00:00, 10.82it/s, gen_loss=4.2] Training Epoch 172/201: 100%|██████████| 98/98 [00:09<00:00, 10.62it/s, gen_loss=3.69] Training Epoch 173/201: 100%|██████████| 98/98 [00:08<00:00, 10.89it/s, gen_loss=3.02] Training Epoch 174/201: 100%|██████████| 98/98 [00:09<00:00, 10.78it/s, gen_loss=2.04] Training Epoch 175/201: 100%|██████████| 98/98 [00:09<00:00, 10.79it/s, gen_loss=2.42] Training Epoch 176/201: 100%|██████████| 98/98 [00:09<00:00, 10.86it/s, gen_loss=2.36] Training Epoch 177/201: 100%|██████████| 98/98 [00:09<00:00, 10.80it/s, gen_loss=2.55] Training Epoch 178/201: 100%|██████████| 98/98 [00:09<00:00, 10.71it/s, gen_loss=2.08] Training Epoch 179/201: 100%|██████████| 98/98 [00:09<00:00, 10.59it/s, gen_loss=2.02] Training Epoch 180/201: 100%|██████████| 98/98 [00:09<00:00, 10.85it/s, gen_loss=2.49] Training Epoch 181/201: 100%|██████████| 98/98 [00:09<00:00, 10.69it/s, gen_loss=3.08] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 372.57403564453125, KID: 0.6114421486854553
Training Epoch 182/201: 100%|██████████| 98/98 [00:09<00:00, 10.78it/s, gen_loss=1.97] Training Epoch 183/201: 100%|██████████| 98/98 [00:09<00:00, 10.78it/s, gen_loss=1.93] Training Epoch 184/201: 100%|██████████| 98/98 [00:09<00:00, 10.67it/s, gen_loss=1.71] Training Epoch 185/201: 100%|██████████| 98/98 [00:09<00:00, 10.86it/s, gen_loss=2.07] Training Epoch 186/201: 100%|██████████| 98/98 [00:09<00:00, 10.59it/s, gen_loss=1.68] Training Epoch 187/201: 100%|██████████| 98/98 [00:09<00:00, 10.87it/s, gen_loss=1.62] Training Epoch 188/201: 100%|██████████| 98/98 [00:09<00:00, 10.87it/s, gen_loss=1.9] Training Epoch 189/201: 100%|██████████| 98/98 [00:09<00:00, 10.89it/s, gen_loss=1.63] Training Epoch 190/201: 100%|██████████| 98/98 [00:09<00:00, 10.83it/s, gen_loss=1.72] Training Epoch 191/201: 100%|██████████| 98/98 [00:09<00:00, 10.85it/s, gen_loss=1.93] Training Epoch 192/201: 100%|██████████| 98/98 [00:09<00:00, 10.77it/s, gen_loss=1.74] Training Epoch 193/201: 100%|██████████| 98/98 [00:09<00:00, 10.78it/s, gen_loss=1.45] Training Epoch 194/201: 100%|██████████| 98/98 [00:09<00:00, 10.62it/s, gen_loss=1.72] Training Epoch 195/201: 100%|██████████| 98/98 [00:08<00:00, 10.90it/s, gen_loss=1.9] Training Epoch 196/201: 100%|██████████| 98/98 [00:09<00:00, 10.78it/s, gen_loss=1.57] Training Epoch 197/201: 100%|██████████| 98/98 [00:09<00:00, 10.81it/s, gen_loss=1.45] Training Epoch 198/201: 100%|██████████| 98/98 [00:09<00:00, 10.81it/s, gen_loss=1.76] Training Epoch 199/201: 100%|██████████| 98/98 [00:09<00:00, 10.69it/s, gen_loss=1.57] Training Epoch 200/201: 100%|██████████| 98/98 [00:09<00:00, 10.80it/s, gen_loss=1.41] Training Epoch 201/201: 100%|██████████| 98/98 [00:09<00:00, 10.82it/s, gen_loss=1.4] 100%|██████████| 19/19 [00:09<00:00, 1.99it/s]
FID: 342.0776672363281, KID: 0.5195409059524536
<Figure size 1200x600 with 0 Axes>
Observations¶
- DiffAugment seems to have improved the FID reaching a new low of 36
- Generated images seem more not bad
- Training process is very unstable and slow
- After 100 epochs the model deteriorated heavily and even with the balancing, it didnt recover and just collapsed.
Evaluation¶
def display_images(imgs, labels):
num_rows = 10
num_cols = 10
fig, axs = plt.subplots(num_rows, num_cols, figsize=(12, 12))
axs = axs.flatten()
for i in range(100):
axs[i].imshow(imgs[i])
class_index = labels[i].nonzero().item()
axs[i].set_title(CLASSES[class_index])
axs[i].axis('off')
plt.tight_layout()
plt.show()
def getClassImages(generator, num, n=100):
generator.eval()
noise = torch.normal(0, 1, (n, generator.latent_dim), device=device)
one_hot = torch.zeros(n, 10, device=device)
one_hot[:, num] = 1
generated_imgs = generator(noise, one_hot)
generated_imgs = (generated_imgs.cpu().detach().permute(0, 2, 3, 1).numpy() - (-1)) / (1 - (-1))
display_images(generated_imgs, one_hot)
for i in range(10):
getClassImages(acgan_embed.best_model[0], i)
Conclusion¶
Final Thoughts:¶
Image Quality:
Most of the images seemed to be of resonable quality. Atleast we can make up which label they are by looking at it. The model was successful in capturing the overall characteristics of an image but fails to capture the fine details. For example, for the ship, it successfully captured the sea and general shape of a ship but failes to draw the ship properly. Similiarly for the horse, it can draw the overall frame of a horse but fails at fine details like proportion and perspective. Mode collapse could be observed in many of the classes such as Dog, Cat and Bird. In Truck, Ship and Plane, although there is some diversity, there are many similiar images. I would consider Car and Truck to be our most successful generations.
What went right?
- Conditioning: Although the variety in images is not great, all the images seem to belong ot their class or atleast exhibited the characteristics
- Stability: By using our Balancing mechanism, we were able to stabilise the model training process. This helped the discirminator and generator to go back on trakc if either begins to diverge.
Limitations:
- Data Limitations: In many of the images present in the dataset, they have questionable photos such as a picture of the head of an oscrich classified as a bird or a dog curled up and not showing the face. These images, although are technically belong to their class, by showing weird perspectives they confuse the model on how a image should look like
- Model Limitations: We feel that the low variety of images could be attributed to our simpler architecture which may not capture all the necessary information.
- Mode Collapse: We feel that some mode collapse might be due to excessive conditioning done at the generator, essentially making the latent vector for same class images very similiar. Although this helps in better control over the class of an image, it compromises image variety. We feel that we didn't balance the two correctly.
- DiffAugment: Although in terms of FID, diffaugment obtained our lowest ever FID values, it didn't generate good looking images and was very slow. Each epoch using diffaugment was 4-5x slower than our normal epochs. This contrained our ability to train for more epochs due to the sheer amount of time it took to train.
How can we improve?
- More efficient FID: We had initially tries to use a library called Torch-Fidelity for our FID calculations and had fully integrated it into our code but there was a issue with the library itself. After diagnosing the problem, we found that the maintainers of the library had not properly transitioned thei code as Numpy got updates. So they were using
np.intwhich was deprecated in favour ofnp.int64andnp.int32and as a result, our lates numpy wasn't compaitble with their library. We also experimented with implementing a custom FID calculation but our method was very slow and poorly optimised as compared to TorchMetrics. - Better Conditioning: We tried to use Conditional Batch Normalisation to be used but couldn't get it to work properly. It was very buggy and crashed our training sometimes. But we are aware that it is a very powerful method.
- More control over image generation: We had explored the StyleGAN architecture but we decided that it was beyond the realm of our capabilities
- Different Loss functions: We found a few formulas pertaining to Hinge Loss and Wassersteing Loss but neither of them imrpoved our model substantially and we had to drop it in favour of the simpler BCE Loss.
- ResNet Style Architectures: Although we have a good model, we somes think that maybe the model might not be complex enough to learn the information at hand. A more complex model might be the solution. But due to computing requirements as well as time requirements, we decided that a simpler architecture is good enough and decided to focus on the training process and strategies.
Future Plans:
- Experimenting with FID: We were quite intrigued with FID from the beginning and thought of using it as our loss function but that became impractical due to the slowness of FID. But in this realm there has ben progress in the research world where in this paper researchers have found a way to calculated FID much faster and used it to backpropagate. This is something we would like to try in the future.
- Better Tuning: When training our model, we found that changing certain parameters vastly improved model performacne. We would like to explore more automated ways of doing this. One library we are look at is Optuna which we didn't have time to implement. So for this project, we mainly did tuning manually.
- Different Architectures and Losses: Right now we have just one loss function mentioned here although we technically tried 3. We want to review our implementation and fix it so that is is more effective in the future. Same for architecture, as mentioned above, we would like to try more advanced archtecture patterns to enable the model to learn more information.